00:00:00.000 Started by upstream project "autotest-per-patch" build number 122897 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.075 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.104 Fetching changes from the remote Git repository 00:00:00.108 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.143 Using shallow fetch with depth 1 00:00:00.143 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.143 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.177 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.444 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.455 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.466 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:04.466 > git config core.sparsecheckout # timeout=10 00:00:04.475 > git read-tree -mu HEAD # timeout=10 00:00:04.491 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:04.507 Commit message: "inventory/dev: add missing long names" 00:00:04.507 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:04.587 [Pipeline] Start of Pipeline 00:00:04.599 [Pipeline] library 00:00:04.601 Loading library shm_lib@master 00:00:04.601 Library shm_lib@master is cached. Copying from home. 00:00:04.614 [Pipeline] node 00:00:19.615 Still waiting to schedule task 00:00:19.615 Waiting for next available executor on ‘vagrant-vm-host’ 00:12:38.061 Running on VM-host-SM16 in /var/jenkins/workspace/centos7-vg-autotest 00:12:38.062 [Pipeline] { 00:12:38.074 [Pipeline] catchError 00:12:38.075 [Pipeline] { 00:12:38.090 [Pipeline] wrap 00:12:38.097 [Pipeline] { 00:12:38.102 [Pipeline] stage 00:12:38.103 [Pipeline] { (Prologue) 00:12:38.118 [Pipeline] echo 00:12:38.119 Node: VM-host-SM16 00:12:38.122 [Pipeline] cleanWs 00:12:38.129 [WS-CLEANUP] Deleting project workspace... 00:12:38.129 [WS-CLEANUP] Deferred wipeout is used... 
00:12:38.134 [WS-CLEANUP] done 00:12:38.297 [Pipeline] setCustomBuildProperty 00:12:38.367 [Pipeline] nodesByLabel 00:12:38.368 Found a total of 1 nodes with the 'sorcerer' label 00:12:38.375 [Pipeline] httpRequest 00:12:38.378 HttpMethod: GET 00:12:38.379 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:12:38.381 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:12:38.382 Response Code: HTTP/1.1 200 OK 00:12:38.383 Success: Status code 200 is in the accepted range: 200,404 00:12:38.383 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:12:38.522 [Pipeline] sh 00:12:38.800 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:12:38.819 [Pipeline] httpRequest 00:12:38.823 HttpMethod: GET 00:12:38.824 URL: http://10.211.164.101/packages/spdk_b7a2519d9a8bfd4d92b109ad21c121341f8d5e38.tar.gz 00:12:38.825 Sending request to url: http://10.211.164.101/packages/spdk_b7a2519d9a8bfd4d92b109ad21c121341f8d5e38.tar.gz 00:12:38.826 Response Code: HTTP/1.1 200 OK 00:12:38.827 Success: Status code 200 is in the accepted range: 200,404 00:12:38.827 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/spdk_b7a2519d9a8bfd4d92b109ad21c121341f8d5e38.tar.gz 00:12:40.957 [Pipeline] sh 00:12:41.236 + tar --no-same-owner -xf spdk_b7a2519d9a8bfd4d92b109ad21c121341f8d5e38.tar.gz 00:12:44.525 [Pipeline] sh 00:12:44.803 + git -C spdk log --oneline -n5 00:12:44.803 b7a2519d9 python/rpc: Unify parameters in all calls bdev.py 00:12:44.803 3389cbfa6 python/rpc: Fix mismatches in bdev rpc docs 00:12:44.803 b68ae4fb9 nvmf-tcp: Added queue depth tracing support 00:12:44.803 46d7b94f0 nvmf-rdma: Added queue depth tracing support 00:12:44.803 0127345c8 nvme-tcp: Added queue depth tracing support 00:12:44.816 [Pipeline] writeFile 00:12:44.830 [Pipeline] sh 00:12:45.102 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:12:45.112 [Pipeline] sh 00:12:45.441 + cat autorun-spdk.conf 00:12:45.441 SPDK_TEST_UNITTEST=1 00:12:45.441 SPDK_RUN_FUNCTIONAL_TEST=1 00:12:45.441 SPDK_TEST_BLOCKDEV=1 00:12:45.441 SPDK_TEST_DAOS=1 00:12:45.441 SPDK_RUN_ASAN=1 00:12:45.441 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:45.446 RUN_NIGHTLY=0 00:12:45.450 [Pipeline] } 00:12:45.467 [Pipeline] // stage 00:12:45.481 [Pipeline] stage 00:12:45.483 [Pipeline] { (Run VM) 00:12:45.499 [Pipeline] sh 00:12:45.777 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:12:45.777 + echo 'Start stage prepare_nvme.sh' 00:12:45.777 Start stage prepare_nvme.sh 00:12:45.777 + [[ -n 5 ]] 00:12:45.777 + disk_prefix=ex5 00:12:45.777 + [[ -n /var/jenkins/workspace/centos7-vg-autotest ]] 00:12:45.777 + [[ -e /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf ]] 00:12:45.777 + source /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf 00:12:45.777 ++ SPDK_TEST_UNITTEST=1 00:12:45.777 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:12:45.777 ++ SPDK_TEST_BLOCKDEV=1 00:12:45.777 ++ SPDK_TEST_DAOS=1 00:12:45.777 ++ SPDK_RUN_ASAN=1 00:12:45.777 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:45.777 ++ RUN_NIGHTLY=0 00:12:45.777 + cd /var/jenkins/workspace/centos7-vg-autotest 00:12:45.777 + nvme_files=() 00:12:45.777 + declare -A nvme_files 00:12:45.777 + backend_dir=/var/lib/libvirt/images/backends 00:12:45.777 + nvme_files['nvme.img']=5G 00:12:45.777 + nvme_files['nvme-cmb.img']=5G 00:12:45.777 + nvme_files['nvme-multi0.img']=4G 
00:12:45.777 + nvme_files['nvme-multi1.img']=4G 00:12:45.777 + nvme_files['nvme-multi2.img']=4G 00:12:45.777 + nvme_files['nvme-openstack.img']=8G 00:12:45.777 + nvme_files['nvme-zns.img']=5G 00:12:45.777 + (( SPDK_TEST_NVME_PMR == 1 )) 00:12:45.777 + (( SPDK_TEST_FTL == 1 )) 00:12:45.777 + (( SPDK_TEST_NVME_FDP == 1 )) 00:12:45.777 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:12:45.777 + for nvme in "${!nvme_files[@]}" 00:12:45.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:12:45.777 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:12:45.777 + for nvme in "${!nvme_files[@]}" 00:12:45.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:12:45.777 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:12:45.777 + for nvme in "${!nvme_files[@]}" 00:12:45.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:12:45.777 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:12:45.777 + for nvme in "${!nvme_files[@]}" 00:12:45.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:12:45.777 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:12:45.777 + for nvme in "${!nvme_files[@]}" 00:12:45.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:12:45.777 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:12:45.777 + for nvme in "${!nvme_files[@]}" 00:12:45.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:12:45.777 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:12:45.777 + for nvme in "${!nvme_files[@]}" 00:12:45.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:12:45.777 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:12:45.777 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:12:45.777 + echo 'End stage prepare_nvme.sh' 00:12:45.777 End stage prepare_nvme.sh 00:12:45.789 [Pipeline] sh 00:12:46.067 + DISTRO=centos7 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:12:46.067 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -H -a -v -f centos7 00:12:46.067 00:12:46.067 DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant 00:12:46.067 SPDK_DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk 00:12:46.067 VAGRANT_TARGET=/var/jenkins/workspace/centos7-vg-autotest 00:12:46.067 HELP=0 00:12:46.067 DRY_RUN=0 00:12:46.067 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img, 00:12:46.067 NVME_DISKS_TYPE=nvme, 00:12:46.067 NVME_AUTO_CREATE=0 00:12:46.067 NVME_DISKS_NAMESPACES=, 00:12:46.067 NVME_CMB=, 00:12:46.067 NVME_PMR=, 00:12:46.067 NVME_ZNS=, 00:12:46.067 NVME_MS=, 00:12:46.067 NVME_FDP=, 00:12:46.067 
SPDK_VAGRANT_DISTRO=centos7 00:12:46.067 SPDK_VAGRANT_VMCPU=10 00:12:46.067 SPDK_VAGRANT_VMRAM=12288 00:12:46.067 SPDK_VAGRANT_PROVIDER=libvirt 00:12:46.067 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:12:46.067 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:12:46.067 SPDK_OPENSTACK_NETWORK=0 00:12:46.067 VAGRANT_PACKAGE_BOX=0 00:12:46.067 VAGRANTFILE=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:12:46.067 FORCE_DISTRO=true 00:12:46.067 VAGRANT_BOX_VERSION= 00:12:46.067 EXTRA_VAGRANTFILES= 00:12:46.067 NIC_MODEL=e1000 00:12:46.067 00:12:46.067 mkdir: created directory '/var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt' 00:12:46.067 /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt /var/jenkins/workspace/centos7-vg-autotest 00:12:49.353 Bringing machine 'default' up with 'libvirt' provider... 00:12:49.918 ==> default: Creating image (snapshot of base box volume). 00:12:50.175 ==> default: Creating domain with the following settings... 00:12:50.175 ==> default: -- Name: centos7-7.8.2003-1711172311-2200_default_1715771168_8b58cce97256ba1943c7 00:12:50.175 ==> default: -- Domain type: kvm 00:12:50.175 ==> default: -- Cpus: 10 00:12:50.175 ==> default: -- Feature: acpi 00:12:50.175 ==> default: -- Feature: apic 00:12:50.175 ==> default: -- Feature: pae 00:12:50.175 ==> default: -- Memory: 12288M 00:12:50.176 ==> default: -- Memory Backing: hugepages: 00:12:50.176 ==> default: -- Management MAC: 00:12:50.176 ==> default: -- Loader: 00:12:50.176 ==> default: -- Nvram: 00:12:50.176 ==> default: -- Base box: spdk/centos7 00:12:50.176 ==> default: -- Storage pool: default 00:12:50.176 ==> default: -- Image: /var/lib/libvirt/images/centos7-7.8.2003-1711172311-2200_default_1715771168_8b58cce97256ba1943c7.img (20G) 00:12:50.176 ==> default: -- Volume Cache: default 00:12:50.176 ==> default: -- Kernel: 00:12:50.176 ==> default: -- Initrd: 00:12:50.176 ==> default: -- Graphics Type: vnc 00:12:50.176 ==> default: -- Graphics Port: -1 00:12:50.176 ==> default: -- Graphics IP: 127.0.0.1 00:12:50.176 ==> default: -- Graphics Password: Not defined 00:12:50.176 ==> default: -- Video Type: cirrus 00:12:50.176 ==> default: -- Video VRAM: 9216 00:12:50.176 ==> default: -- Sound Type: 00:12:50.176 ==> default: -- Keymap: en-us 00:12:50.176 ==> default: -- TPM Path: 00:12:50.176 ==> default: -- INPUT: type=mouse, bus=ps2 00:12:50.176 ==> default: -- Command line args: 00:12:50.176 ==> default: -> value=-device, 00:12:50.176 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:12:50.176 ==> default: -> value=-drive, 00:12:50.176 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:12:50.176 ==> default: -> value=-device, 00:12:50.176 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:50.434 ==> default: Creating shared folders metadata... 00:12:50.434 ==> default: Starting domain. 00:12:51.851 ==> default: Waiting for domain to get an IP address... 00:13:01.816 ==> default: Waiting for SSH to become available... 00:13:06.000 ==> default: Configuring and enabling network interfaces... 
00:13:10.183 default: SSH address: 192.168.121.94:22 00:13:10.183 default: SSH username: vagrant 00:13:10.183 default: SSH auth method: private key 00:13:11.117 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:13:21.096 ==> default: Mounting SSHFS shared folder... 00:13:21.354 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output => /home/vagrant/spdk_repo/output 00:13:21.354 ==> default: Checking Mount.. 00:13:21.921 ==> default: Folder Successfully Mounted! 00:13:21.921 ==> default: Running provisioner: file... 00:13:22.486 default: ~/.gitconfig => .gitconfig 00:13:22.486 00:13:22.486 SUCCESS! 00:13:22.486 00:13:22.486 cd to /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt and type "vagrant ssh" to use. 00:13:22.486 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:13:22.486 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt" to destroy all trace of vm. 00:13:22.486 00:13:22.495 [Pipeline] } 00:13:22.512 [Pipeline] // stage 00:13:22.520 [Pipeline] dir 00:13:22.521 Running in /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt 00:13:22.523 [Pipeline] { 00:13:22.537 [Pipeline] catchError 00:13:22.538 [Pipeline] { 00:13:22.552 [Pipeline] sh 00:13:22.887 + vagrant ssh-config --host vagrant 00:13:22.887 + sed -ne /^Host/,$p 00:13:22.887 + tee ssh_conf 00:13:26.171 Host vagrant 00:13:26.171 HostName 192.168.121.94 00:13:26.171 User vagrant 00:13:26.171 Port 22 00:13:26.171 UserKnownHostsFile /dev/null 00:13:26.171 StrictHostKeyChecking no 00:13:26.171 PasswordAuthentication no 00:13:26.171 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-centos7/7.8.2003-1711172311-2200/libvirt/centos7 00:13:26.171 IdentitiesOnly yes 00:13:26.171 LogLevel FATAL 00:13:26.171 ForwardAgent yes 00:13:26.171 ForwardX11 yes 00:13:26.171 00:13:26.183 [Pipeline] withEnv 00:13:26.185 [Pipeline] { 00:13:26.200 [Pipeline] sh 00:13:26.480 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:13:26.480 source /etc/os-release 00:13:26.480 [[ -e /image.version ]] && img=$(< /image.version) 00:13:26.480 # Minimal, systemd-like check. 00:13:26.480 if [[ -e /.dockerenv ]]; then 00:13:26.480 # Clear garbage from the node's name: 00:13:26.480 # agt-er_autotest_547-896 -> autotest_547-896 00:13:26.480 # $HOSTNAME is the actual container id 00:13:26.480 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:13:26.480 if mountpoint -q /etc/hostname; then 00:13:26.480 # We can assume this is a mount from a host where container is running, 00:13:26.480 # so fetch its hostname to easily identify the target swarm worker. 
00:13:26.480 container="$(< /etc/hostname) ($agent)" 00:13:26.480 else 00:13:26.480 # Fallback 00:13:26.480 container=$agent 00:13:26.480 fi 00:13:26.480 fi 00:13:26.480 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:13:26.480 00:13:26.569 [Pipeline] } 00:13:26.590 [Pipeline] // withEnv 00:13:26.598 [Pipeline] setCustomBuildProperty 00:13:26.613 [Pipeline] stage 00:13:26.614 [Pipeline] { (Tests) 00:13:26.634 [Pipeline] sh 00:13:26.914 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:13:26.928 [Pipeline] timeout 00:13:26.928 Timeout set to expire in 1 hr 0 min 00:13:26.930 [Pipeline] { 00:13:26.947 [Pipeline] sh 00:13:27.226 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:13:27.792 HEAD is now at b7a2519d9 python/rpc: Unify parameters in all calls bdev.py 00:13:27.804 [Pipeline] sh 00:13:28.082 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:13:28.082 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:13:28.099 [Pipeline] sh 00:13:28.378 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:13:28.391 [Pipeline] sh 00:13:28.666 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:13:28.666 ++ readlink -f spdk_repo 00:13:28.666 + DIR_ROOT=/home/vagrant/spdk_repo 00:13:28.666 + [[ -n /home/vagrant/spdk_repo ]] 00:13:28.666 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:13:28.666 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:13:28.666 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:13:28.666 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:13:28.666 + [[ -d /home/vagrant/spdk_repo/output ]] 00:13:28.666 + cd /home/vagrant/spdk_repo 00:13:28.666 + source /etc/os-release 00:13:28.666 ++ NAME='CentOS Linux' 00:13:28.666 ++ VERSION='7 (Core)' 00:13:28.666 ++ ID=centos 00:13:28.666 ++ ID_LIKE='rhel fedora' 00:13:28.666 ++ VERSION_ID=7 00:13:28.666 ++ PRETTY_NAME='CentOS Linux 7 (Core)' 00:13:28.666 ++ ANSI_COLOR='0;31' 00:13:28.666 ++ CPE_NAME=cpe:/o:centos:centos:7 00:13:28.666 ++ HOME_URL=https://www.centos.org/ 00:13:28.666 ++ BUG_REPORT_URL=https://bugs.centos.org/ 00:13:28.666 ++ CENTOS_MANTISBT_PROJECT=CentOS-7 00:13:28.666 ++ CENTOS_MANTISBT_PROJECT_VERSION=7 00:13:28.666 ++ REDHAT_SUPPORT_PRODUCT=centos 00:13:28.666 ++ REDHAT_SUPPORT_PRODUCT_VERSION=7 00:13:28.666 + uname -a 00:13:28.666 Linux centos7-cloud-1711172311-2200 3.10.0-1160.114.2.el7.x86_64 #1 SMP Wed Mar 20 15:54:52 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:13:28.666 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:28.666 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:13:28.666 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:13:28.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:13:28.923 Hugepages 00:13:28.923 node hugesize free / total 00:13:28.923 node0 1048576kB 0 / 0 00:13:28.923 node0 2048kB 0 / 0 00:13:28.923 00:13:28.923 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:28.923 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:13:28.923 NVMe 0000:00:10.0 1b36 0010 0 nvme nvme0 nvme0n1 00:13:28.923 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:13:28.923 + 
rm -f /tmp/spdk-ld-path 00:13:28.923 + source autorun-spdk.conf 00:13:28.923 ++ SPDK_TEST_UNITTEST=1 00:13:28.923 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:28.923 ++ SPDK_TEST_BLOCKDEV=1 00:13:28.923 ++ SPDK_TEST_DAOS=1 00:13:28.923 ++ SPDK_RUN_ASAN=1 00:13:28.923 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:28.923 ++ RUN_NIGHTLY=0 00:13:28.923 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:13:28.923 + [[ -n '' ]] 00:13:28.923 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:13:28.923 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:13:28.923 + for M in /var/spdk/build-*-manifest.txt 00:13:28.923 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:13:28.923 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:13:29.182 + for M in /var/spdk/build-*-manifest.txt 00:13:29.182 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:13:29.182 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:13:29.182 ++ uname 00:13:29.182 + [[ Linux == \L\i\n\u\x ]] 00:13:29.182 + sudo dmesg -T 00:13:29.182 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:13:29.182 + sudo dmesg --clear 00:13:29.182 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:13:29.182 + dmesg_pid=2827 00:13:29.182 + sudo dmesg -Tw 00:13:29.182 + [[ CentOS Linux == FreeBSD ]] 00:13:29.182 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:29.182 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:29.182 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:13:29.182 + [[ -x /usr/src/fio-static/fio ]] 00:13:29.182 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:13:29.182 + [[ ! -v VFIO_QEMU_BIN ]] 00:13:29.182 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:13:29.182 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:13:29.182 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:13:29.182 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:13:29.182 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:13:29.182 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:13:29.182 Test configuration: 00:13:29.182 SPDK_TEST_UNITTEST=1 00:13:29.182 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:29.182 SPDK_TEST_BLOCKDEV=1 00:13:29.182 SPDK_TEST_DAOS=1 00:13:29.182 SPDK_RUN_ASAN=1 00:13:29.182 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:29.182 RUN_NIGHTLY=0 11:06:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.182 11:06:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:13:29.182 11:06:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.182 11:06:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.182 11:06:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:13:29.182 11:06:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:13:29.182 11:06:47 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:13:29.182 11:06:47 -- paths/export.sh@5 -- $ export PATH 00:13:29.182 11:06:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:13:29.182 11:06:47 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:13:29.182 11:06:47 -- common/autobuild_common.sh@437 -- $ date +%s 00:13:29.182 11:06:47 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715771207.XXXXXX 00:13:29.182 11:06:47 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715771207.cQpmAw 00:13:29.182 11:06:47 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:13:29.182 11:06:47 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:13:29.182 11:06:47 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:13:29.182 11:06:47 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:13:29.182 11:06:47 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:13:29.182 11:06:47 -- common/autobuild_common.sh@453 -- $ get_config_params 00:13:29.182 11:06:47 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:13:29.182 11:06:47 -- common/autotest_common.sh@10 -- $ set +x 00:13:29.182 11:06:47 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos' 00:13:29.182 11:06:47 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:13:29.182 11:06:47 -- pm/common@17 -- $ local monitor 00:13:29.182 11:06:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:29.182 11:06:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:29.182 11:06:47 -- pm/common@25 -- $ sleep 1 00:13:29.182 11:06:47 -- pm/common@21 -- $ date +%s 00:13:29.182 11:06:47 -- pm/common@21 -- $ date +%s 00:13:29.182 11:06:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715771207 00:13:29.182 11:06:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715771207 00:13:29.182 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715771207_collect-vmstat.pm.log 00:13:29.182 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715771207_collect-cpu-load.pm.log 00:13:30.115 11:06:48 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:13:30.115 11:06:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:13:30.115 11:06:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:13:30.115 11:06:48 -- spdk/autobuild.sh@13 -- $ cd 
/home/vagrant/spdk_repo/spdk 00:13:30.115 11:06:48 -- spdk/autobuild.sh@16 -- $ date -u 00:13:30.115 Wed May 15 11:06:48 UTC 2024 00:13:30.115 11:06:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:13:30.373 v24.05-pre-612-gb7a2519d9 00:13:30.373 11:06:48 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:13:30.373 11:06:48 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:13:30.373 11:06:48 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:13:30.373 11:06:48 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:13:30.374 11:06:48 -- common/autotest_common.sh@10 -- $ set +x 00:13:30.374 ************************************ 00:13:30.374 START TEST asan 00:13:30.374 ************************************ 00:13:30.374 using asan 00:13:30.374 ************************************ 00:13:30.374 END TEST asan 00:13:30.374 ************************************ 00:13:30.374 11:06:48 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:13:30.374 00:13:30.374 real 0m0.000s 00:13:30.374 user 0m0.000s 00:13:30.374 sys 0m0.000s 00:13:30.374 11:06:48 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:13:30.374 11:06:48 asan -- common/autotest_common.sh@10 -- $ set +x 00:13:30.374 11:06:48 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:13:30.374 11:06:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:13:30.374 11:06:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:13:30.374 11:06:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:13:30.374 11:06:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:13:30.374 11:06:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:13:30.374 11:06:48 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:13:30.374 11:06:48 -- spdk/autobuild.sh@58 -- $ unittest_build 00:13:30.374 11:06:48 -- common/autobuild_common.sh@413 -- $ run_test unittest_build _unittest_build 00:13:30.374 11:06:48 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:13:30.374 11:06:48 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:13:30.374 11:06:48 -- common/autotest_common.sh@10 -- $ set +x 00:13:30.374 ************************************ 00:13:30.374 START TEST unittest_build 00:13:30.374 ************************************ 00:13:30.374 11:06:48 unittest_build -- common/autotest_common.sh@1121 -- $ _unittest_build 00:13:30.374 11:06:48 unittest_build -- common/autobuild_common.sh@404 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos --without-shared 00:13:30.374 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:30.374 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:30.649 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:13:30.649 Using 'verbs' RDMA provider 00:13:31.234 WARNING: ISA-L & DPDK crypto cannot be used as nasm ver must be 2.14 or newer. 00:13:31.234 Without ISA-L, there is no software support for crypto or compression, 00:13:31.234 so these features will be disabled. 00:13:31.492 Creating mk/config.mk...done. 00:13:31.492 Creating mk/cc.flags.mk...done. 00:13:31.492 Type 'make' to build. 00:13:31.492 11:06:49 unittest_build -- common/autobuild_common.sh@405 -- $ make -j10 00:13:31.750 make[1]: Nothing to be done for 'all'. 
00:13:35.931 The Meson build system 00:13:35.931 Version: 0.61.5 00:13:35.931 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:13:35.931 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:13:35.931 Build type: native build 00:13:35.931 Program cat found: YES (/bin/cat) 00:13:35.931 Project name: DPDK 00:13:35.931 Project version: 23.11.0 00:13:35.931 C compiler for the host machine: cc (gcc 10.2.1 "cc (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)") 00:13:35.931 C linker for the host machine: cc ld.bfd 2.35-5 00:13:35.931 Host machine cpu family: x86_64 00:13:35.931 Host machine cpu: x86_64 00:13:35.931 Message: ## Building in Developer Mode ## 00:13:35.931 Program pkg-config found: YES (/bin/pkg-config) 00:13:35.931 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:13:35.931 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:13:35.931 Program python3 found: YES (/usr/bin/python3) 00:13:35.931 Program cat found: YES (/bin/cat) 00:13:35.931 Compiler for C supports arguments -march=native: YES 00:13:35.931 Checking for size of "void *" : 8 00:13:35.931 Checking for size of "void *" : 8 00:13:35.931 Library m found: YES 00:13:35.931 Library numa found: YES 00:13:35.931 Has header "numaif.h" : YES 00:13:35.931 Library fdt found: NO 00:13:35.931 Library execinfo found: NO 00:13:35.931 Has header "execinfo.h" : YES 00:13:35.931 Found pkg-config: /bin/pkg-config (0.27.1) 00:13:35.931 Run-time dependency libarchive found: NO (tried pkgconfig) 00:13:35.931 Run-time dependency libbsd found: NO (tried pkgconfig) 00:13:35.931 Run-time dependency jansson found: NO (tried pkgconfig) 00:13:35.931 Run-time dependency openssl found: YES 1.0.2k 00:13:35.931 Run-time dependency libpcap found: NO (tried pkgconfig) 00:13:35.931 Library pcap found: NO 00:13:35.931 Compiler for C supports arguments -Wcast-qual: YES 00:13:35.931 Compiler for C supports arguments -Wdeprecated: YES 00:13:35.931 Compiler for C supports arguments -Wformat: YES 00:13:35.931 Compiler for C supports arguments -Wformat-nonliteral: NO 00:13:35.931 Compiler for C supports arguments -Wformat-security: NO 00:13:35.931 Compiler for C supports arguments -Wmissing-declarations: YES 00:13:35.931 Compiler for C supports arguments -Wmissing-prototypes: YES 00:13:35.931 Compiler for C supports arguments -Wnested-externs: YES 00:13:35.931 Compiler for C supports arguments -Wold-style-definition: YES 00:13:35.931 Compiler for C supports arguments -Wpointer-arith: YES 00:13:35.931 Compiler for C supports arguments -Wsign-compare: YES 00:13:35.931 Compiler for C supports arguments -Wstrict-prototypes: YES 00:13:35.931 Compiler for C supports arguments -Wundef: YES 00:13:35.931 Compiler for C supports arguments -Wwrite-strings: YES 00:13:35.931 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:13:35.931 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:13:35.931 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:13:35.931 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:13:35.931 Program objdump found: YES (/bin/objdump) 00:13:35.931 Compiler for C supports arguments -mavx512f: YES 00:13:35.931 Checking if "AVX512 checking" compiles: YES 00:13:35.931 Fetching value of define "__SSE4_2__" : 1 00:13:35.931 Fetching value of define "__AES__" : 1 00:13:35.931 Fetching value of define "__AVX__" : 1 00:13:35.931 Fetching value of define "__AVX2__" : 1 
00:13:35.931 Fetching value of define "__AVX512BW__" : 00:13:35.931 Fetching value of define "__AVX512CD__" : 00:13:35.931 Fetching value of define "__AVX512DQ__" : 00:13:35.931 Fetching value of define "__AVX512F__" : 00:13:35.931 Fetching value of define "__AVX512VL__" : 00:13:35.931 Fetching value of define "__PCLMUL__" : 1 00:13:35.931 Fetching value of define "__RDRND__" : 1 00:13:35.931 Fetching value of define "__RDSEED__" : 1 00:13:35.931 Fetching value of define "__VPCLMULQDQ__" : 00:13:35.931 Fetching value of define "__znver1__" : 00:13:35.931 Fetching value of define "__znver2__" : 00:13:35.931 Fetching value of define "__znver3__" : 00:13:35.931 Fetching value of define "__znver4__" : 00:13:35.931 Library asan found: YES 00:13:35.931 Compiler for C supports arguments -Wno-format-truncation: YES 00:13:35.931 Message: lib/log: Defining dependency "log" 00:13:35.931 Message: lib/kvargs: Defining dependency "kvargs" 00:13:35.931 Message: lib/telemetry: Defining dependency "telemetry" 00:13:35.931 Library rt found: YES 00:13:35.931 Checking for function "getentropy" : NO 00:13:35.931 Message: lib/eal: Defining dependency "eal" 00:13:35.931 Message: lib/ring: Defining dependency "ring" 00:13:35.931 Message: lib/rcu: Defining dependency "rcu" 00:13:35.931 Message: lib/mempool: Defining dependency "mempool" 00:13:35.931 Message: lib/mbuf: Defining dependency "mbuf" 00:13:35.931 Fetching value of define "__PCLMUL__" : 1 (cached) 00:13:35.931 Fetching value of define "__AVX512F__" : (cached) 00:13:35.931 Compiler for C supports arguments -mpclmul: YES 00:13:35.931 Compiler for C supports arguments -maes: YES 00:13:37.305 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:37.305 Compiler for C supports arguments -mavx512bw: YES 00:13:37.305 Compiler for C supports arguments -mavx512dq: YES 00:13:37.305 Compiler for C supports arguments -mavx512vl: YES 00:13:37.305 Compiler for C supports arguments -mvpclmulqdq: YES 00:13:37.305 Compiler for C supports arguments -mavx2: YES 00:13:37.305 Compiler for C supports arguments -mavx: YES 00:13:37.305 Message: lib/net: Defining dependency "net" 00:13:37.305 Message: lib/meter: Defining dependency "meter" 00:13:37.305 Message: lib/ethdev: Defining dependency "ethdev" 00:13:37.305 Message: lib/pci: Defining dependency "pci" 00:13:37.305 Message: lib/cmdline: Defining dependency "cmdline" 00:13:37.305 Message: lib/hash: Defining dependency "hash" 00:13:37.305 Message: lib/timer: Defining dependency "timer" 00:13:37.305 Message: lib/compressdev: Defining dependency "compressdev" 00:13:37.305 Message: lib/cryptodev: Defining dependency "cryptodev" 00:13:37.305 Message: lib/dmadev: Defining dependency "dmadev" 00:13:37.305 Compiler for C supports arguments -Wno-cast-qual: YES 00:13:37.305 Message: lib/power: Defining dependency "power" 00:13:37.305 Message: lib/reorder: Defining dependency "reorder" 00:13:37.305 Message: lib/security: Defining dependency "security" 00:13:37.305 Has header "linux/userfaultfd.h" : YES 00:13:37.305 Has header "linux/vduse.h" : NO 00:13:37.305 Message: lib/vhost: Defining dependency "vhost" 00:13:37.305 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:13:37.305 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:13:37.305 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:13:37.305 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:13:37.305 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:13:37.305 Message: Disabling regex/* 
drivers: missing internal dependency "regexdev" 00:13:37.305 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:13:37.305 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:13:37.305 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:13:37.305 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:13:37.305 Program doxygen found: YES (/bin/doxygen) 00:13:37.305 Configuring doxy-api-html.conf using configuration 00:13:37.305 Configuring doxy-api-man.conf using configuration 00:13:37.305 Program mandb found: YES (/bin/mandb) 00:13:37.305 Program sphinx-build found: NO 00:13:37.305 Configuring rte_build_config.h using configuration 00:13:37.305 Message: 00:13:37.305 ================= 00:13:37.305 Applications Enabled 00:13:37.305 ================= 00:13:37.305 00:13:37.305 apps: 00:13:37.305 00:13:37.305 00:13:37.305 Message: 00:13:37.305 ================= 00:13:37.305 Libraries Enabled 00:13:37.305 ================= 00:13:37.305 00:13:37.305 libs: 00:13:37.305 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:13:37.305 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:13:37.305 cryptodev, dmadev, power, reorder, security, vhost, 00:13:37.305 00:13:37.305 Message: 00:13:37.305 =============== 00:13:37.305 Drivers Enabled 00:13:37.305 =============== 00:13:37.305 00:13:37.305 common: 00:13:37.305 00:13:37.305 bus: 00:13:37.305 pci, vdev, 00:13:37.305 mempool: 00:13:37.305 ring, 00:13:37.305 dma: 00:13:37.305 00:13:37.305 net: 00:13:37.305 00:13:37.305 crypto: 00:13:37.305 00:13:37.305 compress: 00:13:37.305 00:13:37.305 vdpa: 00:13:37.305 00:13:37.305 00:13:37.305 Message: 00:13:37.305 ================= 00:13:37.305 Content Skipped 00:13:37.305 ================= 00:13:37.305 00:13:37.305 apps: 00:13:37.305 dumpcap: explicitly disabled via build config 00:13:37.305 graph: explicitly disabled via build config 00:13:37.305 pdump: explicitly disabled via build config 00:13:37.305 proc-info: explicitly disabled via build config 00:13:37.305 test-acl: explicitly disabled via build config 00:13:37.305 test-bbdev: explicitly disabled via build config 00:13:37.305 test-cmdline: explicitly disabled via build config 00:13:37.305 test-compress-perf: explicitly disabled via build config 00:13:37.305 test-crypto-perf: explicitly disabled via build config 00:13:37.305 test-dma-perf: explicitly disabled via build config 00:13:37.305 test-eventdev: explicitly disabled via build config 00:13:37.305 test-fib: explicitly disabled via build config 00:13:37.305 test-flow-perf: explicitly disabled via build config 00:13:37.305 test-gpudev: explicitly disabled via build config 00:13:37.305 test-mldev: explicitly disabled via build config 00:13:37.305 test-pipeline: explicitly disabled via build config 00:13:37.305 test-pmd: explicitly disabled via build config 00:13:37.305 test-regex: explicitly disabled via build config 00:13:37.305 test-sad: explicitly disabled via build config 00:13:37.305 test-security-perf: explicitly disabled via build config 00:13:37.305 00:13:37.305 libs: 00:13:37.305 metrics: explicitly disabled via build config 00:13:37.305 acl: explicitly disabled via build config 00:13:37.305 bbdev: explicitly disabled via build config 00:13:37.305 bitratestats: explicitly disabled via build config 00:13:37.305 bpf: explicitly disabled via build config 00:13:37.305 cfgfile: explicitly disabled via build config 00:13:37.305 distributor: explicitly disabled via build config 00:13:37.305 efd: 
explicitly disabled via build config 00:13:37.305 eventdev: explicitly disabled via build config 00:13:37.305 dispatcher: explicitly disabled via build config 00:13:37.305 gpudev: explicitly disabled via build config 00:13:37.305 gro: explicitly disabled via build config 00:13:37.305 gso: explicitly disabled via build config 00:13:37.305 ip_frag: explicitly disabled via build config 00:13:37.305 jobstats: explicitly disabled via build config 00:13:37.305 latencystats: explicitly disabled via build config 00:13:37.305 lpm: explicitly disabled via build config 00:13:37.305 member: explicitly disabled via build config 00:13:37.305 pcapng: explicitly disabled via build config 00:13:37.305 rawdev: explicitly disabled via build config 00:13:37.305 regexdev: explicitly disabled via build config 00:13:37.305 mldev: explicitly disabled via build config 00:13:37.305 rib: explicitly disabled via build config 00:13:37.305 sched: explicitly disabled via build config 00:13:37.305 stack: explicitly disabled via build config 00:13:37.305 ipsec: explicitly disabled via build config 00:13:37.305 pdcp: explicitly disabled via build config 00:13:37.305 fib: explicitly disabled via build config 00:13:37.306 port: explicitly disabled via build config 00:13:37.306 pdump: explicitly disabled via build config 00:13:37.306 table: explicitly disabled via build config 00:13:37.306 pipeline: explicitly disabled via build config 00:13:37.306 graph: explicitly disabled via build config 00:13:37.306 node: explicitly disabled via build config 00:13:37.306 00:13:37.306 drivers: 00:13:37.306 common/cpt: not in enabled drivers build config 00:13:37.306 common/dpaax: not in enabled drivers build config 00:13:37.306 common/iavf: not in enabled drivers build config 00:13:37.306 common/idpf: not in enabled drivers build config 00:13:37.306 common/mvep: not in enabled drivers build config 00:13:37.306 common/octeontx: not in enabled drivers build config 00:13:37.306 bus/auxiliary: not in enabled drivers build config 00:13:37.306 bus/cdx: not in enabled drivers build config 00:13:37.306 bus/dpaa: not in enabled drivers build config 00:13:37.306 bus/fslmc: not in enabled drivers build config 00:13:37.306 bus/ifpga: not in enabled drivers build config 00:13:37.306 bus/platform: not in enabled drivers build config 00:13:37.306 bus/vmbus: not in enabled drivers build config 00:13:37.306 common/cnxk: not in enabled drivers build config 00:13:37.306 common/mlx5: not in enabled drivers build config 00:13:37.306 common/nfp: not in enabled drivers build config 00:13:37.306 common/qat: not in enabled drivers build config 00:13:37.306 common/sfc_efx: not in enabled drivers build config 00:13:37.306 mempool/bucket: not in enabled drivers build config 00:13:37.306 mempool/cnxk: not in enabled drivers build config 00:13:37.306 mempool/dpaa: not in enabled drivers build config 00:13:37.306 mempool/dpaa2: not in enabled drivers build config 00:13:37.306 mempool/octeontx: not in enabled drivers build config 00:13:37.306 mempool/stack: not in enabled drivers build config 00:13:37.306 dma/cnxk: not in enabled drivers build config 00:13:37.306 dma/dpaa: not in enabled drivers build config 00:13:37.306 dma/dpaa2: not in enabled drivers build config 00:13:37.306 dma/hisilicon: not in enabled drivers build config 00:13:37.306 dma/idxd: not in enabled drivers build config 00:13:37.306 dma/ioat: not in enabled drivers build config 00:13:37.306 dma/skeleton: not in enabled drivers build config 00:13:37.306 net/af_packet: not in enabled drivers build config 
00:13:37.306 net/af_xdp: not in enabled drivers build config 00:13:37.306 net/ark: not in enabled drivers build config 00:13:37.306 net/atlantic: not in enabled drivers build config 00:13:37.306 net/avp: not in enabled drivers build config 00:13:37.306 net/axgbe: not in enabled drivers build config 00:13:37.306 net/bnx2x: not in enabled drivers build config 00:13:37.306 net/bnxt: not in enabled drivers build config 00:13:37.306 net/bonding: not in enabled drivers build config 00:13:37.306 net/cnxk: not in enabled drivers build config 00:13:37.306 net/cpfl: not in enabled drivers build config 00:13:37.306 net/cxgbe: not in enabled drivers build config 00:13:37.306 net/dpaa: not in enabled drivers build config 00:13:37.306 net/dpaa2: not in enabled drivers build config 00:13:37.306 net/e1000: not in enabled drivers build config 00:13:37.306 net/ena: not in enabled drivers build config 00:13:37.306 net/enetc: not in enabled drivers build config 00:13:37.306 net/enetfec: not in enabled drivers build config 00:13:37.306 net/enic: not in enabled drivers build config 00:13:37.306 net/failsafe: not in enabled drivers build config 00:13:37.306 net/fm10k: not in enabled drivers build config 00:13:37.306 net/gve: not in enabled drivers build config 00:13:37.306 net/hinic: not in enabled drivers build config 00:13:37.306 net/hns3: not in enabled drivers build config 00:13:37.306 net/i40e: not in enabled drivers build config 00:13:37.306 net/iavf: not in enabled drivers build config 00:13:37.306 net/ice: not in enabled drivers build config 00:13:37.306 net/idpf: not in enabled drivers build config 00:13:37.306 net/igc: not in enabled drivers build config 00:13:37.306 net/ionic: not in enabled drivers build config 00:13:37.306 net/ipn3ke: not in enabled drivers build config 00:13:37.306 net/ixgbe: not in enabled drivers build config 00:13:37.306 net/mana: not in enabled drivers build config 00:13:37.306 net/memif: not in enabled drivers build config 00:13:37.306 net/mlx4: not in enabled drivers build config 00:13:37.306 net/mlx5: not in enabled drivers build config 00:13:37.306 net/mvneta: not in enabled drivers build config 00:13:37.306 net/mvpp2: not in enabled drivers build config 00:13:37.306 net/netvsc: not in enabled drivers build config 00:13:37.306 net/nfb: not in enabled drivers build config 00:13:37.306 net/nfp: not in enabled drivers build config 00:13:37.306 net/ngbe: not in enabled drivers build config 00:13:37.306 net/null: not in enabled drivers build config 00:13:37.306 net/octeontx: not in enabled drivers build config 00:13:37.306 net/octeon_ep: not in enabled drivers build config 00:13:37.306 net/pcap: not in enabled drivers build config 00:13:37.306 net/pfe: not in enabled drivers build config 00:13:37.306 net/qede: not in enabled drivers build config 00:13:37.306 net/ring: not in enabled drivers build config 00:13:37.306 net/sfc: not in enabled drivers build config 00:13:37.306 net/softnic: not in enabled drivers build config 00:13:37.306 net/tap: not in enabled drivers build config 00:13:37.306 net/thunderx: not in enabled drivers build config 00:13:37.306 net/txgbe: not in enabled drivers build config 00:13:37.306 net/vdev_netvsc: not in enabled drivers build config 00:13:37.306 net/vhost: not in enabled drivers build config 00:13:37.306 net/virtio: not in enabled drivers build config 00:13:37.306 net/vmxnet3: not in enabled drivers build config 00:13:37.306 raw/*: missing internal dependency, "rawdev" 00:13:37.306 crypto/armv8: not in enabled drivers build config 00:13:37.306 
crypto/bcmfs: not in enabled drivers build config 00:13:37.306 crypto/caam_jr: not in enabled drivers build config 00:13:37.306 crypto/ccp: not in enabled drivers build config 00:13:37.306 crypto/cnxk: not in enabled drivers build config 00:13:37.306 crypto/dpaa_sec: not in enabled drivers build config 00:13:37.306 crypto/dpaa2_sec: not in enabled drivers build config 00:13:37.306 crypto/ipsec_mb: not in enabled drivers build config 00:13:37.306 crypto/mlx5: not in enabled drivers build config 00:13:37.306 crypto/mvsam: not in enabled drivers build config 00:13:37.306 crypto/nitrox: not in enabled drivers build config 00:13:37.306 crypto/null: not in enabled drivers build config 00:13:37.306 crypto/octeontx: not in enabled drivers build config 00:13:37.306 crypto/openssl: not in enabled drivers build config 00:13:37.306 crypto/scheduler: not in enabled drivers build config 00:13:37.306 crypto/uadk: not in enabled drivers build config 00:13:37.306 crypto/virtio: not in enabled drivers build config 00:13:37.306 compress/isal: not in enabled drivers build config 00:13:37.306 compress/mlx5: not in enabled drivers build config 00:13:37.306 compress/octeontx: not in enabled drivers build config 00:13:37.306 compress/zlib: not in enabled drivers build config 00:13:37.306 regex/*: missing internal dependency, "regexdev" 00:13:37.306 ml/*: missing internal dependency, "mldev" 00:13:37.306 vdpa/ifc: not in enabled drivers build config 00:13:37.306 vdpa/mlx5: not in enabled drivers build config 00:13:37.306 vdpa/nfp: not in enabled drivers build config 00:13:37.306 vdpa/sfc: not in enabled drivers build config 00:13:37.306 event/*: missing internal dependency, "eventdev" 00:13:37.306 baseband/*: missing internal dependency, "bbdev" 00:13:37.306 gpu/*: missing internal dependency, "gpudev" 00:13:37.306 00:13:37.306 00:13:37.872 Build targets in project: 85 00:13:37.872 00:13:37.872 DPDK 23.11.0 00:13:37.872 00:13:37.872 User defined options 00:13:37.872 buildtype : debug 00:13:37.872 default_library : static 00:13:37.872 libdir : lib 00:13:37.872 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:37.872 b_sanitize : address 00:13:37.872 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:13:37.872 c_link_args : 00:13:37.872 cpu_instruction_set: native 00:13:37.872 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:13:37.872 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:13:37.872 enable_docs : false 00:13:37.872 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:13:37.872 enable_kmods : false 00:13:37.872 tests : false 00:13:37.872 00:13:37.872 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:37.873 NOTICE: You are using Python 3.6 which is EOL. 
Starting with v0.62.0, Meson will require Python 3.7 or newer 00:13:38.439 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:13:38.439 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:13:38.439 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:38.439 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:38.439 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:38.439 [5/264] Linking static target lib/librte_kvargs.a 00:13:38.439 [6/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:38.439 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:38.439 [8/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:13:38.439 [9/264] Linking static target lib/librte_log.a 00:13:38.697 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:13:38.697 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:38.697 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:38.697 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:38.697 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:38.956 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:38.956 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:38.956 [17/264] Linking static target lib/librte_telemetry.a 00:13:38.956 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:38.956 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:38.956 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:39.214 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:39.214 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:39.214 [23/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:39.214 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:39.214 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:39.214 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:39.214 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:39.214 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:39.472 [29/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:13:39.472 [30/264] Linking target lib/librte_log.so.24.0 00:13:39.472 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:13:39.472 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:39.472 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:39.472 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:39.472 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:39.472 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:39.472 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:39.472 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:39.472 
[39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:39.472 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:39.730 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:39.730 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:13:39.730 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:39.730 [44/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:13:39.730 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:13:39.730 [46/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:13:39.988 [47/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:39.988 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:39.988 [49/264] Linking target lib/librte_kvargs.so.24.0 00:13:39.988 [50/264] Linking target lib/librte_telemetry.so.24.0 00:13:39.988 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:39.988 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:39.988 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:39.988 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:39.988 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:39.988 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:39.988 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:39.988 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:39.988 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:40.247 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:13:40.247 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:40.247 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:40.247 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:40.247 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:40.247 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:40.247 [66/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:13:40.247 [67/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:13:40.505 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:40.505 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:40.505 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:13:40.505 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:40.505 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:40.505 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:40.505 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:40.505 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:40.505 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:40.505 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:40.763 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:40.763 [79/264] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:40.763 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:40.763 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:40.763 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:40.763 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:41.021 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:41.021 [85/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:41.021 [86/264] Linking static target lib/librte_ring.a 00:13:41.021 [87/264] Linking static target lib/librte_eal.a 00:13:41.021 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:41.021 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:41.021 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:41.021 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:41.021 [92/264] Linking static target lib/librte_mempool.a 00:13:41.021 [93/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:41.021 [94/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:41.021 [95/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:41.021 [96/264] Linking static target lib/librte_rcu.a 00:13:41.279 [97/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:13:41.279 [98/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:13:41.279 [99/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:41.537 [100/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:41.537 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:41.537 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:41.537 [103/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:41.537 [104/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:41.537 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:41.795 [106/264] Linking static target lib/librte_net.a 00:13:41.795 [107/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:41.795 [108/264] Linking static target lib/librte_mbuf.a 00:13:41.795 [109/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:41.795 [110/264] Linking static target lib/librte_meter.a 00:13:41.795 [111/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:42.053 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:42.053 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:42.053 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:42.053 [115/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:42.311 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:42.311 [117/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:42.311 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:42.311 [119/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:42.311 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:42.569 
[121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:42.827 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:42.827 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:42.827 [124/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:42.827 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:42.827 [126/264] Linking static target lib/librte_pci.a 00:13:42.827 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:42.827 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:42.827 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:42.827 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:42.827 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:43.085 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:43.085 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:43.085 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:43.085 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:43.085 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:43.085 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:43.085 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:43.085 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:43.085 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:43.085 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:43.085 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:43.343 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:43.343 [144/264] Linking static target lib/librte_cmdline.a 00:13:43.343 [145/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:43.343 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:43.602 [147/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:43.602 [148/264] Linking static target lib/librte_timer.a 00:13:43.602 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:43.602 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:43.602 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:43.602 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:43.860 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:43.860 [154/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:43.860 [155/264] Linking static target lib/librte_compressdev.a 00:13:43.860 [156/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:44.119 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:44.119 [158/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:44.119 [159/264] Linking static target lib/librte_hash.a 00:13:44.119 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:44.119 [161/264] Compiling C 
object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:44.119 [162/264] Linking static target lib/librte_dmadev.a 00:13:44.379 [163/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:44.379 [164/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:44.379 [165/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:44.379 [166/264] Linking static target lib/librte_ethdev.a 00:13:44.379 [167/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:44.379 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:44.638 [169/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:44.638 [170/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:44.638 [171/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:44.896 [172/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:44.896 [173/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:44.896 [174/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:44.896 [175/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:44.896 [176/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:44.896 [177/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:44.896 [178/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:44.896 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:44.896 [180/264] Linking static target lib/librte_cryptodev.a 00:13:45.153 [181/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:45.153 [182/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:45.153 [183/264] Linking static target lib/librte_power.a 00:13:45.412 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:45.412 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:45.412 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:45.412 [187/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:45.412 [188/264] Linking static target lib/librte_reorder.a 00:13:45.412 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:45.412 [190/264] Linking static target lib/librte_security.a 00:13:45.979 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:46.237 [192/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:46.237 [193/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:46.237 [194/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:46.237 [195/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:46.237 [196/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:46.237 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:46.495 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:46.495 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:46.495 [200/264] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:46.752 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:46.752 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:46.752 [203/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:46.752 [204/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:46.752 [205/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:47.010 [206/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:47.010 [207/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:47.010 [208/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:47.010 [209/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:47.010 [210/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:47.010 [211/264] Linking static target drivers/librte_bus_pci.a 00:13:47.010 [212/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:47.268 [213/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:47.268 [214/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:47.268 [215/264] Linking static target drivers/librte_bus_vdev.a 00:13:47.268 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:47.268 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:47.268 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:47.534 [219/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:47.534 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:47.534 [221/264] Linking static target drivers/librte_mempool_ring.a 00:13:47.793 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:47.793 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:47.793 [224/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:13:48.070 [225/264] Linking target lib/librte_eal.so.24.0 00:13:48.369 [226/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:13:48.369 [227/264] Linking target lib/librte_ring.so.24.0 00:13:48.369 [228/264] Linking target lib/librte_meter.so.24.0 00:13:48.369 [229/264] Linking target lib/librte_pci.so.24.0 00:13:48.369 [230/264] Linking target lib/librte_timer.so.24.0 00:13:48.369 [231/264] Linking target lib/librte_dmadev.so.24.0 00:13:48.369 [232/264] Linking target drivers/librte_bus_vdev.so.24.0 00:13:48.639 [233/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:48.904 [234/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:13:48.904 [235/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:13:48.904 [236/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:13:48.904 [237/264] Linking target drivers/librte_bus_pci.so.24.0 00:13:48.904 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:13:48.904 [239/264] Generating symbol 
file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:13:49.162 [240/264] Linking target lib/librte_rcu.so.24.0 00:13:49.162 [241/264] Linking target lib/librte_mempool.so.24.0 00:13:49.434 [242/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:49.435 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:13:49.435 [244/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:13:49.692 [245/264] Linking target drivers/librte_mempool_ring.so.24.0 00:13:49.692 [246/264] Linking target lib/librte_mbuf.so.24.0 00:13:50.262 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:13:50.262 [248/264] Linking target lib/librte_net.so.24.0 00:13:50.262 [249/264] Linking target lib/librte_cryptodev.so.24.0 00:13:50.262 [250/264] Linking target lib/librte_reorder.so.24.0 00:13:50.262 [251/264] Linking target lib/librte_compressdev.so.24.0 00:13:50.828 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:13:50.828 [253/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:13:50.828 [254/264] Linking target lib/librte_hash.so.24.0 00:13:50.828 [255/264] Linking target lib/librte_cmdline.so.24.0 00:13:50.828 [256/264] Linking target lib/librte_security.so.24.0 00:13:50.828 [257/264] Linking target lib/librte_ethdev.so.24.0 00:13:51.394 [258/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:13:51.394 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:13:51.394 [260/264] Linking target lib/librte_power.so.24.0 00:13:52.328 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:52.328 [262/264] Linking static target lib/librte_vhost.a 00:13:54.229 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:13:54.487 [264/264] Linking target lib/librte_vhost.so.24.0 00:13:54.487 NOTICE: You are using Python 3.6 which is EOL. 
Starting with v0.62.0, Meson will require Python 3.7 or newer 00:13:55.859 CC lib/ut_mock/mock.o 00:13:55.859 CC lib/log/log.o 00:13:55.859 CC lib/ut/ut.o 00:13:55.859 CC lib/log/log_flags.o 00:13:55.859 CC lib/log/log_deprecated.o 00:13:56.141 LIB libspdk_ut_mock.a 00:13:56.141 LIB libspdk_ut.a 00:13:56.141 LIB libspdk_log.a 00:13:56.421 CXX lib/trace_parser/trace.o 00:13:56.421 CC lib/util/base64.o 00:13:56.421 CC lib/util/bit_array.o 00:13:56.421 CC lib/dma/dma.o 00:13:56.421 CC lib/ioat/ioat.o 00:13:56.421 CC lib/util/cpuset.o 00:13:56.421 CC lib/util/crc16.o 00:13:56.421 CC lib/util/crc32.o 00:13:56.421 CC lib/util/crc32c.o 00:13:56.421 CC lib/vfio_user/host/vfio_user_pci.o 00:13:56.421 CC lib/util/crc32_ieee.o 00:13:56.421 CC lib/util/crc64.o 00:13:56.421 CC lib/vfio_user/host/vfio_user.o 00:13:56.421 LIB libspdk_dma.a 00:13:56.421 CC lib/util/dif.o 00:13:56.421 CC lib/util/fd.o 00:13:56.679 LIB libspdk_ioat.a 00:13:56.679 CC lib/util/file.o 00:13:56.679 CC lib/util/hexlify.o 00:13:56.679 CC lib/util/iov.o 00:13:56.679 CC lib/util/math.o 00:13:56.679 CC lib/util/pipe.o 00:13:56.679 LIB libspdk_vfio_user.a 00:13:56.679 CC lib/util/strerror_tls.o 00:13:56.679 CC lib/util/string.o 00:13:56.679 CC lib/util/uuid.o 00:13:56.679 CC lib/util/fd_group.o 00:13:56.937 CC lib/util/xor.o 00:13:56.937 CC lib/util/zipf.o 00:13:56.937 LIB libspdk_util.a 00:13:57.194 LIB libspdk_trace_parser.a 00:13:57.194 CC lib/idxd/idxd.o 00:13:57.194 CC lib/env_dpdk/env.o 00:13:57.194 CC lib/idxd/idxd_user.o 00:13:57.194 CC lib/env_dpdk/memory.o 00:13:57.194 CC lib/env_dpdk/pci.o 00:13:57.194 CC lib/rdma/common.o 00:13:57.194 CC lib/vmd/vmd.o 00:13:57.194 CC lib/json/json_parse.o 00:13:57.194 CC lib/env_dpdk/init.o 00:13:57.194 CC lib/conf/conf.o 00:13:57.452 CC lib/env_dpdk/threads.o 00:13:57.452 CC lib/json/json_util.o 00:13:57.452 LIB libspdk_conf.a 00:13:57.452 CC lib/vmd/led.o 00:13:57.452 CC lib/env_dpdk/pci_ioat.o 00:13:57.452 CC lib/rdma/rdma_verbs.o 00:13:57.452 CC lib/env_dpdk/pci_virtio.o 00:13:57.710 CC lib/env_dpdk/pci_vmd.o 00:13:57.710 CC lib/env_dpdk/pci_idxd.o 00:13:57.710 CC lib/json/json_write.o 00:13:57.710 LIB libspdk_idxd.a 00:13:57.710 CC lib/env_dpdk/pci_event.o 00:13:57.710 LIB libspdk_vmd.a 00:13:57.710 CC lib/env_dpdk/sigbus_handler.o 00:13:57.710 CC lib/env_dpdk/pci_dpdk.o 00:13:57.710 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:57.710 LIB libspdk_rdma.a 00:13:57.710 CC lib/env_dpdk/pci_dpdk_2211.o 00:13:57.710 LIB libspdk_json.a 00:13:57.968 CC lib/jsonrpc/jsonrpc_server.o 00:13:57.968 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:57.968 CC lib/jsonrpc/jsonrpc_client.o 00:13:57.968 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:13:58.227 LIB libspdk_env_dpdk.a 00:13:58.227 LIB libspdk_jsonrpc.a 00:13:58.485 CC lib/rpc/rpc.o 00:13:58.742 LIB libspdk_rpc.a 00:13:59.000 CC lib/keyring/keyring.o 00:13:59.000 CC lib/notify/notify.o 00:13:59.000 CC lib/trace/trace.o 00:13:59.000 CC lib/notify/notify_rpc.o 00:13:59.000 CC lib/trace/trace_flags.o 00:13:59.000 CC lib/keyring/keyring_rpc.o 00:13:59.001 CC lib/trace/trace_rpc.o 00:13:59.001 LIB libspdk_notify.a 00:13:59.001 LIB libspdk_keyring.a 00:13:59.259 LIB libspdk_trace.a 00:13:59.517 CC lib/thread/thread.o 00:13:59.517 CC lib/sock/sock.o 00:13:59.517 CC lib/thread/iobuf.o 00:13:59.517 CC lib/sock/sock_rpc.o 00:13:59.775 LIB libspdk_sock.a 00:14:00.034 CC lib/nvme/nvme_ctrlr_cmd.o 00:14:00.034 CC lib/nvme/nvme_ctrlr.o 00:14:00.034 CC lib/nvme/nvme_fabric.o 00:14:00.034 CC lib/nvme/nvme_ns_cmd.o 00:14:00.034 CC lib/nvme/nvme_ns.o 00:14:00.034 CC 
lib/nvme/nvme_pcie_common.o 00:14:00.034 CC lib/nvme/nvme_pcie.o 00:14:00.034 CC lib/nvme/nvme_qpair.o 00:14:00.034 CC lib/nvme/nvme.o 00:14:00.034 LIB libspdk_thread.a 00:14:00.292 CC lib/accel/accel.o 00:14:00.549 CC lib/nvme/nvme_quirks.o 00:14:00.550 CC lib/nvme/nvme_transport.o 00:14:00.550 CC lib/nvme/nvme_discovery.o 00:14:00.550 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:14:00.550 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:14:00.807 CC lib/nvme/nvme_tcp.o 00:14:00.807 CC lib/nvme/nvme_opal.o 00:14:00.807 CC lib/nvme/nvme_io_msg.o 00:14:00.807 CC lib/accel/accel_rpc.o 00:14:01.064 CC lib/nvme/nvme_poll_group.o 00:14:01.064 CC lib/accel/accel_sw.o 00:14:01.064 CC lib/nvme/nvme_zns.o 00:14:01.064 CC lib/nvme/nvme_stubs.o 00:14:01.064 CC lib/nvme/nvme_auth.o 00:14:01.064 CC lib/blob/blobstore.o 00:14:01.322 LIB libspdk_accel.a 00:14:01.322 CC lib/init/json_config.o 00:14:01.322 CC lib/nvme/nvme_cuse.o 00:14:01.322 CC lib/nvme/nvme_rdma.o 00:14:01.322 CC lib/init/subsystem.o 00:14:01.322 CC lib/virtio/virtio.o 00:14:01.580 CC lib/init/subsystem_rpc.o 00:14:01.580 CC lib/init/rpc.o 00:14:01.580 CC lib/blob/request.o 00:14:01.580 CC lib/virtio/virtio_vhost_user.o 00:14:01.580 CC lib/virtio/virtio_vfio_user.o 00:14:01.580 CC lib/virtio/virtio_pci.o 00:14:01.580 LIB libspdk_init.a 00:14:01.580 CC lib/blob/zeroes.o 00:14:01.838 CC lib/blob/blob_bs_dev.o 00:14:01.838 CC lib/bdev/bdev.o 00:14:01.838 CC lib/bdev/bdev_rpc.o 00:14:01.838 CC lib/bdev/bdev_zone.o 00:14:01.838 CC lib/bdev/part.o 00:14:01.838 LIB libspdk_virtio.a 00:14:01.838 CC lib/bdev/scsi_nvme.o 00:14:01.838 CC lib/event/app.o 00:14:01.838 CC lib/event/reactor.o 00:14:01.838 CC lib/event/log_rpc.o 00:14:02.096 CC lib/event/app_rpc.o 00:14:02.096 CC lib/event/scheduler_static.o 00:14:02.354 LIB libspdk_event.a 00:14:02.354 LIB libspdk_nvme.a 00:14:02.919 LIB libspdk_blob.a 00:14:03.184 CC lib/blobfs/blobfs.o 00:14:03.184 CC lib/blobfs/tree.o 00:14:03.184 CC lib/lvol/lvol.o 00:14:03.481 LIB libspdk_bdev.a 00:14:03.481 CC lib/nbd/nbd.o 00:14:03.481 CC lib/nvmf/ctrlr.o 00:14:03.481 CC lib/scsi/dev.o 00:14:03.481 CC lib/nbd/nbd_rpc.o 00:14:03.481 CC lib/ftl/ftl_core.o 00:14:03.481 CC lib/nvmf/ctrlr_discovery.o 00:14:03.481 CC lib/scsi/lun.o 00:14:03.481 CC lib/nvmf/ctrlr_bdev.o 00:14:03.738 LIB libspdk_lvol.a 00:14:03.738 LIB libspdk_blobfs.a 00:14:03.738 CC lib/scsi/port.o 00:14:03.738 CC lib/scsi/scsi.o 00:14:03.738 CC lib/scsi/scsi_bdev.o 00:14:03.738 CC lib/ftl/ftl_init.o 00:14:03.738 CC lib/nvmf/subsystem.o 00:14:03.738 CC lib/nvmf/nvmf.o 00:14:03.738 CC lib/scsi/scsi_pr.o 00:14:03.996 CC lib/ftl/ftl_layout.o 00:14:03.996 LIB libspdk_nbd.a 00:14:03.996 CC lib/nvmf/nvmf_rpc.o 00:14:03.996 CC lib/nvmf/transport.o 00:14:03.996 CC lib/ftl/ftl_debug.o 00:14:03.996 CC lib/nvmf/tcp.o 00:14:03.996 CC lib/scsi/scsi_rpc.o 00:14:04.252 CC lib/ftl/ftl_io.o 00:14:04.252 CC lib/ftl/ftl_sb.o 00:14:04.252 CC lib/scsi/task.o 00:14:04.252 CC lib/ftl/ftl_l2p.o 00:14:04.252 CC lib/nvmf/stubs.o 00:14:04.252 CC lib/ftl/ftl_l2p_flat.o 00:14:04.252 CC lib/ftl/ftl_nv_cache.o 00:14:04.252 CC lib/ftl/ftl_band.o 00:14:04.252 LIB libspdk_scsi.a 00:14:04.509 CC lib/ftl/ftl_band_ops.o 00:14:04.509 CC lib/nvmf/rdma.o 00:14:04.509 CC lib/ftl/ftl_writer.o 00:14:04.509 CC lib/iscsi/conn.o 00:14:04.509 CC lib/iscsi/init_grp.o 00:14:04.509 CC lib/ftl/ftl_rq.o 00:14:04.509 CC lib/iscsi/iscsi.o 00:14:04.509 CC lib/ftl/ftl_reloc.o 00:14:04.509 CC lib/ftl/ftl_l2p_cache.o 00:14:04.765 CC lib/ftl/ftl_p2l.o 00:14:04.765 CC lib/ftl/mngt/ftl_mngt.o 00:14:04.765 CC lib/iscsi/md5.o 
00:14:04.765 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:14:04.765 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:14:04.765 CC lib/iscsi/param.o 00:14:05.022 CC lib/iscsi/portal_grp.o 00:14:05.022 CC lib/ftl/mngt/ftl_mngt_startup.o 00:14:05.022 CC lib/iscsi/tgt_node.o 00:14:05.022 CC lib/ftl/mngt/ftl_mngt_md.o 00:14:05.022 CC lib/ftl/mngt/ftl_mngt_misc.o 00:14:05.022 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:14:05.022 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:14:05.022 CC lib/ftl/mngt/ftl_mngt_band.o 00:14:05.022 CC lib/iscsi/iscsi_subsystem.o 00:14:05.279 CC lib/iscsi/iscsi_rpc.o 00:14:05.279 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:14:05.279 CC lib/iscsi/task.o 00:14:05.279 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:14:05.279 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:14:05.279 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:14:05.279 CC lib/ftl/utils/ftl_conf.o 00:14:05.279 CC lib/ftl/utils/ftl_md.o 00:14:05.279 CC lib/ftl/utils/ftl_mempool.o 00:14:05.279 CC lib/ftl/utils/ftl_bitmap.o 00:14:05.279 CC lib/ftl/utils/ftl_property.o 00:14:05.536 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:14:05.536 LIB libspdk_iscsi.a 00:14:05.536 LIB libspdk_nvmf.a 00:14:05.536 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:14:05.536 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:14:05.536 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:14:05.536 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:14:05.536 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:14:05.536 CC lib/ftl/upgrade/ftl_sb_v3.o 00:14:05.536 CC lib/ftl/upgrade/ftl_sb_v5.o 00:14:05.536 CC lib/vhost/vhost.o 00:14:05.536 CC lib/ftl/nvc/ftl_nvc_dev.o 00:14:05.795 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:14:05.795 CC lib/ftl/base/ftl_base_dev.o 00:14:05.795 CC lib/vhost/vhost_rpc.o 00:14:05.795 CC lib/ftl/base/ftl_base_bdev.o 00:14:05.795 CC lib/ftl/ftl_trace.o 00:14:05.795 CC lib/vhost/vhost_scsi.o 00:14:05.795 CC lib/vhost/vhost_blk.o 00:14:05.795 CC lib/vhost/rte_vhost_user.o 00:14:05.795 LIB libspdk_ftl.a 00:14:06.360 LIB libspdk_vhost.a 00:14:06.618 CC module/env_dpdk/env_dpdk_rpc.o 00:14:06.875 CC module/blob/bdev/blob_bdev.o 00:14:06.875 CC module/keyring/file/keyring.o 00:14:06.875 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:14:06.875 CC module/accel/dsa/accel_dsa.o 00:14:06.875 CC module/scheduler/gscheduler/gscheduler.o 00:14:06.875 CC module/sock/posix/posix.o 00:14:06.875 CC module/scheduler/dynamic/scheduler_dynamic.o 00:14:06.875 CC module/accel/ioat/accel_ioat.o 00:14:06.875 CC module/accel/error/accel_error.o 00:14:06.875 LIB libspdk_env_dpdk_rpc.a 00:14:06.875 CC module/accel/error/accel_error_rpc.o 00:14:06.875 LIB libspdk_scheduler_dpdk_governor.a 00:14:06.875 CC module/keyring/file/keyring_rpc.o 00:14:06.875 LIB libspdk_scheduler_gscheduler.a 00:14:06.875 CC module/accel/ioat/accel_ioat_rpc.o 00:14:06.875 CC module/accel/dsa/accel_dsa_rpc.o 00:14:06.875 LIB libspdk_scheduler_dynamic.a 00:14:06.875 LIB libspdk_accel_error.a 00:14:06.875 LIB libspdk_blob_bdev.a 00:14:07.133 LIB libspdk_keyring_file.a 00:14:07.133 LIB libspdk_accel_ioat.a 00:14:07.133 LIB libspdk_accel_dsa.a 00:14:07.133 CC module/accel/iaa/accel_iaa.o 00:14:07.133 CC module/accel/iaa/accel_iaa_rpc.o 00:14:07.133 CC module/bdev/delay/vbdev_delay.o 00:14:07.133 CC module/bdev/lvol/vbdev_lvol.o 00:14:07.133 CC module/bdev/gpt/gpt.o 00:14:07.133 CC module/blobfs/bdev/blobfs_bdev.o 00:14:07.133 CC module/bdev/malloc/bdev_malloc.o 00:14:07.133 CC module/bdev/null/bdev_null.o 00:14:07.133 CC module/bdev/error/vbdev_error.o 00:14:07.133 LIB libspdk_accel_iaa.a 00:14:07.391 CC module/bdev/malloc/bdev_malloc_rpc.o 00:14:07.391 CC 
module/bdev/nvme/bdev_nvme.o 00:14:07.391 LIB libspdk_sock_posix.a 00:14:07.391 CC module/bdev/gpt/vbdev_gpt.o 00:14:07.391 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:14:07.391 CC module/bdev/error/vbdev_error_rpc.o 00:14:07.391 CC module/bdev/null/bdev_null_rpc.o 00:14:07.391 CC module/bdev/delay/vbdev_delay_rpc.o 00:14:07.391 CC module/bdev/nvme/bdev_nvme_rpc.o 00:14:07.391 CC module/bdev/nvme/nvme_rpc.o 00:14:07.391 LIB libspdk_bdev_malloc.a 00:14:07.391 LIB libspdk_bdev_error.a 00:14:07.391 LIB libspdk_blobfs_bdev.a 00:14:07.648 CC module/bdev/nvme/bdev_mdns_client.o 00:14:07.648 LIB libspdk_bdev_gpt.a 00:14:07.648 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:14:07.648 LIB libspdk_bdev_null.a 00:14:07.648 LIB libspdk_bdev_delay.a 00:14:07.648 CC module/bdev/passthru/vbdev_passthru.o 00:14:07.648 CC module/bdev/raid/bdev_raid.o 00:14:07.648 CC module/bdev/nvme/vbdev_opal.o 00:14:07.648 CC module/bdev/nvme/vbdev_opal_rpc.o 00:14:07.648 CC module/bdev/split/vbdev_split.o 00:14:07.648 CC module/bdev/zone_block/vbdev_zone_block.o 00:14:07.648 CC module/bdev/aio/bdev_aio.o 00:14:07.906 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:14:07.906 LIB libspdk_bdev_lvol.a 00:14:07.906 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:14:07.906 CC module/bdev/split/vbdev_split_rpc.o 00:14:07.906 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:14:07.906 CC module/bdev/aio/bdev_aio_rpc.o 00:14:07.906 LIB libspdk_bdev_passthru.a 00:14:07.906 CC module/bdev/ftl/bdev_ftl.o 00:14:07.906 LIB libspdk_bdev_split.a 00:14:07.906 CC module/bdev/raid/bdev_raid_rpc.o 00:14:08.164 CC module/bdev/raid/bdev_raid_sb.o 00:14:08.164 CC module/bdev/raid/raid0.o 00:14:08.164 LIB libspdk_bdev_zone_block.a 00:14:08.164 LIB libspdk_bdev_aio.a 00:14:08.164 CC module/bdev/ftl/bdev_ftl_rpc.o 00:14:08.164 CC module/bdev/raid/raid1.o 00:14:08.164 CC module/bdev/virtio/bdev_virtio_scsi.o 00:14:08.164 CC module/bdev/daos/bdev_daos.o 00:14:08.164 CC module/bdev/raid/concat.o 00:14:08.164 CC module/bdev/virtio/bdev_virtio_blk.o 00:14:08.164 CC module/bdev/virtio/bdev_virtio_rpc.o 00:14:08.164 CC module/bdev/daos/bdev_daos_rpc.o 00:14:08.164 LIB libspdk_bdev_ftl.a 00:14:08.421 LIB libspdk_bdev_raid.a 00:14:08.421 LIB libspdk_bdev_virtio.a 00:14:08.421 LIB libspdk_bdev_daos.a 00:14:08.679 LIB libspdk_bdev_nvme.a 00:14:09.243 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:14:09.243 CC module/event/subsystems/scheduler/scheduler.o 00:14:09.243 CC module/event/subsystems/sock/sock.o 00:14:09.243 CC module/event/subsystems/keyring/keyring.o 00:14:09.243 CC module/event/subsystems/iobuf/iobuf.o 00:14:09.243 CC module/event/subsystems/vmd/vmd.o 00:14:09.243 CC module/event/subsystems/vmd/vmd_rpc.o 00:14:09.243 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:14:09.243 LIB libspdk_event_keyring.a 00:14:09.243 LIB libspdk_event_sock.a 00:14:09.243 LIB libspdk_event_scheduler.a 00:14:09.243 LIB libspdk_event_vhost_blk.a 00:14:09.243 LIB libspdk_event_iobuf.a 00:14:09.243 LIB libspdk_event_vmd.a 00:14:09.501 CC module/event/subsystems/accel/accel.o 00:14:09.501 LIB libspdk_event_accel.a 00:14:09.760 CC module/event/subsystems/bdev/bdev.o 00:14:10.018 LIB libspdk_event_bdev.a 00:14:10.018 CC module/event/subsystems/nbd/nbd.o 00:14:10.018 CC module/event/subsystems/scsi/scsi.o 00:14:10.018 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:14:10.018 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:14:10.276 LIB libspdk_event_nbd.a 00:14:10.276 LIB libspdk_event_scsi.a 00:14:10.545 LIB libspdk_event_nvmf.a 00:14:10.545 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:14:10.545 CC module/event/subsystems/iscsi/iscsi.o 00:14:10.806 LIB libspdk_event_vhost_scsi.a 00:14:10.806 LIB libspdk_event_iscsi.a 00:14:10.806 CC app/trace_record/trace_record.o 00:14:10.806 CXX app/trace/trace.o 00:14:11.063 CC app/nvmf_tgt/nvmf_main.o 00:14:11.063 CC app/iscsi_tgt/iscsi_tgt.o 00:14:11.063 CC examples/accel/perf/accel_perf.o 00:14:11.063 CC app/spdk_tgt/spdk_tgt.o 00:14:11.063 CC test/accel/dif/dif.o 00:14:11.063 CC test/blobfs/mkfs/mkfs.o 00:14:11.063 CC test/bdev/bdevio/bdevio.o 00:14:11.063 CC test/app/bdev_svc/bdev_svc.o 00:14:11.063 LINK spdk_trace_record 00:14:11.063 LINK nvmf_tgt 00:14:11.321 LINK iscsi_tgt 00:14:11.321 LINK spdk_trace 00:14:11.321 LINK bdev_svc 00:14:11.321 LINK spdk_tgt 00:14:11.321 LINK mkfs 00:14:11.321 LINK dif 00:14:11.321 LINK accel_perf 00:14:11.321 LINK bdevio 00:14:11.581 CC app/spdk_lspci/spdk_lspci.o 00:14:11.581 CC app/spdk_nvme_perf/perf.o 00:14:11.581 LINK spdk_lspci 00:14:12.148 LINK spdk_nvme_perf 00:14:12.148 CC examples/bdev/hello_world/hello_bdev.o 00:14:12.406 LINK hello_bdev 00:14:12.406 CC app/spdk_nvme_identify/identify.o 00:14:12.973 LINK spdk_nvme_identify 00:14:12.973 CC app/spdk_nvme_discover/discovery_aer.o 00:14:13.231 LINK spdk_nvme_discover 00:14:13.489 TEST_HEADER include/spdk/accel_module.h 00:14:13.489 TEST_HEADER include/spdk/bit_pool.h 00:14:13.489 TEST_HEADER include/spdk/ioat.h 00:14:13.489 TEST_HEADER include/spdk/blobfs.h 00:14:13.489 TEST_HEADER include/spdk/notify.h 00:14:13.489 TEST_HEADER include/spdk/pipe.h 00:14:13.489 TEST_HEADER include/spdk/accel.h 00:14:13.489 TEST_HEADER include/spdk/mmio.h 00:14:13.489 TEST_HEADER include/spdk/version.h 00:14:13.489 TEST_HEADER include/spdk/trace_parser.h 00:14:13.489 TEST_HEADER include/spdk/opal_spec.h 00:14:13.489 TEST_HEADER include/spdk/nvmf.h 00:14:13.489 TEST_HEADER include/spdk/bdev.h 00:14:13.489 TEST_HEADER include/spdk/hexlify.h 00:14:13.489 TEST_HEADER include/spdk/likely.h 00:14:13.489 TEST_HEADER include/spdk/keyring_module.h 00:14:13.489 TEST_HEADER include/spdk/memory.h 00:14:13.489 TEST_HEADER include/spdk/vfio_user_pci.h 00:14:13.489 TEST_HEADER include/spdk/dma.h 00:14:13.489 TEST_HEADER include/spdk/nbd.h 00:14:13.489 TEST_HEADER include/spdk/env.h 00:14:13.489 TEST_HEADER include/spdk/nvme_zns.h 00:14:13.489 TEST_HEADER include/spdk/env_dpdk.h 00:14:13.489 TEST_HEADER include/spdk/init.h 00:14:13.489 TEST_HEADER include/spdk/fd_group.h 00:14:13.489 TEST_HEADER include/spdk/bdev_module.h 00:14:13.489 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:14:13.489 TEST_HEADER include/spdk/opal.h 00:14:13.489 TEST_HEADER include/spdk/event.h 00:14:13.489 TEST_HEADER include/spdk/keyring.h 00:14:13.489 TEST_HEADER include/spdk/base64.h 00:14:13.489 TEST_HEADER include/spdk/nvme_intel.h 00:14:13.489 TEST_HEADER include/spdk/blobfs_bdev.h 00:14:13.489 TEST_HEADER include/spdk/vhost.h 00:14:13.489 TEST_HEADER include/spdk/fd.h 00:14:13.489 TEST_HEADER include/spdk/barrier.h 00:14:13.489 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:14:13.489 TEST_HEADER include/spdk/zipf.h 00:14:13.489 TEST_HEADER include/spdk/scheduler.h 00:14:13.489 TEST_HEADER include/spdk/dif.h 00:14:13.489 TEST_HEADER include/spdk/scsi_spec.h 00:14:13.489 TEST_HEADER include/spdk/blob.h 00:14:13.489 TEST_HEADER include/spdk/cpuset.h 00:14:13.489 TEST_HEADER include/spdk/thread.h 00:14:13.489 TEST_HEADER include/spdk/tree.h 00:14:13.489 TEST_HEADER include/spdk/xor.h 00:14:13.489 TEST_HEADER include/spdk/assert.h 00:14:13.489 TEST_HEADER 
include/spdk/file.h 00:14:13.489 TEST_HEADER include/spdk/endian.h 00:14:13.489 TEST_HEADER include/spdk/pci_ids.h 00:14:13.489 TEST_HEADER include/spdk/util.h 00:14:13.489 TEST_HEADER include/spdk/log.h 00:14:13.489 TEST_HEADER include/spdk/sock.h 00:14:13.489 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:14:13.489 TEST_HEADER include/spdk/config.h 00:14:13.489 TEST_HEADER include/spdk/histogram_data.h 00:14:13.489 TEST_HEADER include/spdk/nvmf_spec.h 00:14:13.489 TEST_HEADER include/spdk/idxd_spec.h 00:14:13.489 TEST_HEADER include/spdk/crc16.h 00:14:13.489 TEST_HEADER include/spdk/bdev_zone.h 00:14:13.489 TEST_HEADER include/spdk/stdinc.h 00:14:13.489 TEST_HEADER include/spdk/scsi.h 00:14:13.489 TEST_HEADER include/spdk/jsonrpc.h 00:14:13.489 TEST_HEADER include/spdk/blob_bdev.h 00:14:13.489 TEST_HEADER include/spdk/crc32.h 00:14:13.489 TEST_HEADER include/spdk/nvmf_transport.h 00:14:13.489 TEST_HEADER include/spdk/vmd.h 00:14:13.489 TEST_HEADER include/spdk/uuid.h 00:14:13.489 TEST_HEADER include/spdk/idxd.h 00:14:13.489 TEST_HEADER include/spdk/crc64.h 00:14:13.489 TEST_HEADER include/spdk/nvme.h 00:14:13.489 TEST_HEADER include/spdk/iscsi_spec.h 00:14:13.489 TEST_HEADER include/spdk/queue.h 00:14:13.489 TEST_HEADER include/spdk/nvmf_cmd.h 00:14:13.489 CC app/spdk_top/spdk_top.o 00:14:13.489 TEST_HEADER include/spdk/lvol.h 00:14:13.489 CC test/dma/test_dma/test_dma.o 00:14:13.489 TEST_HEADER include/spdk/ftl.h 00:14:13.489 TEST_HEADER include/spdk/trace.h 00:14:13.489 TEST_HEADER include/spdk/ioat_spec.h 00:14:13.489 TEST_HEADER include/spdk/conf.h 00:14:13.489 TEST_HEADER include/spdk/ublk.h 00:14:13.489 TEST_HEADER include/spdk/bit_array.h 00:14:13.489 TEST_HEADER include/spdk/nvme_spec.h 00:14:13.489 TEST_HEADER include/spdk/string.h 00:14:13.489 TEST_HEADER include/spdk/gpt_spec.h 00:14:13.489 TEST_HEADER include/spdk/nvme_ocssd.h 00:14:13.489 TEST_HEADER include/spdk/json.h 00:14:13.489 TEST_HEADER include/spdk/reduce.h 00:14:13.489 TEST_HEADER include/spdk/rpc.h 00:14:13.489 TEST_HEADER include/spdk/vfio_user_spec.h 00:14:13.489 CXX test/cpp_headers/accel_module.o 00:14:13.489 CC app/vhost/vhost.o 00:14:13.748 CC test/env/mem_callbacks/mem_callbacks.o 00:14:13.748 CXX test/cpp_headers/bit_pool.o 00:14:13.748 LINK nvme_fuzz 00:14:13.748 LINK vhost 00:14:13.748 LINK test_dma 00:14:13.748 CXX test/cpp_headers/ioat.o 00:14:14.006 CC app/spdk_dd/spdk_dd.o 00:14:14.006 CC app/fio/nvme/fio_plugin.o 00:14:14.006 CXX test/cpp_headers/blobfs.o 00:14:14.006 CXX test/cpp_headers/notify.o 00:14:14.288 LINK mem_callbacks 00:14:14.288 LINK spdk_dd 00:14:14.288 CXX test/cpp_headers/pipe.o 00:14:14.288 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:14:14.288 LINK spdk_top 00:14:14.288 CC examples/bdev/bdevperf/bdevperf.o 00:14:14.546 LINK spdk_nvme 00:14:14.546 CXX test/cpp_headers/accel.o 00:14:14.546 CC test/env/vtophys/vtophys.o 00:14:14.546 CXX test/cpp_headers/mmio.o 00:14:14.546 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:14:14.546 LINK vtophys 00:14:14.802 CXX test/cpp_headers/version.o 00:14:14.802 CXX test/cpp_headers/trace_parser.o 00:14:14.802 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:14:14.802 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:14:14.802 CXX test/cpp_headers/opal_spec.o 00:14:14.802 LINK bdevperf 00:14:14.802 LINK env_dpdk_post_init 00:14:15.061 CXX test/cpp_headers/nvmf.o 00:14:15.061 LINK vhost_fuzz 00:14:15.061 CXX test/cpp_headers/bdev.o 00:14:15.319 LINK iscsi_fuzz 00:14:15.319 CC test/env/memory/memory_ut.o 00:14:15.319 CXX test/cpp_headers/hexlify.o 
00:14:15.319 CC app/fio/bdev/fio_plugin.o 00:14:15.577 CXX test/cpp_headers/likely.o 00:14:15.577 CXX test/cpp_headers/keyring_module.o 00:14:15.577 CXX test/cpp_headers/memory.o 00:14:15.577 CC test/env/pci/pci_ut.o 00:14:15.835 CXX test/cpp_headers/vfio_user_pci.o 00:14:15.835 CXX test/cpp_headers/dma.o 00:14:15.835 LINK spdk_bdev 00:14:15.835 LINK memory_ut 00:14:15.835 CC test/app/histogram_perf/histogram_perf.o 00:14:15.835 CXX test/cpp_headers/nbd.o 00:14:15.835 CXX test/cpp_headers/env.o 00:14:15.835 CXX test/cpp_headers/nvme_zns.o 00:14:15.835 CC test/event/event_perf/event_perf.o 00:14:16.092 CXX test/cpp_headers/env_dpdk.o 00:14:16.092 LINK pci_ut 00:14:16.092 LINK histogram_perf 00:14:16.092 CXX test/cpp_headers/init.o 00:14:16.092 CXX test/cpp_headers/fd_group.o 00:14:16.092 LINK event_perf 00:14:16.092 CC examples/blob/hello_world/hello_blob.o 00:14:16.092 CXX test/cpp_headers/bdev_module.o 00:14:16.092 CXX test/cpp_headers/opal.o 00:14:16.092 CXX test/cpp_headers/event.o 00:14:16.092 CXX test/cpp_headers/keyring.o 00:14:16.350 CXX test/cpp_headers/base64.o 00:14:16.350 LINK hello_blob 00:14:16.350 CXX test/cpp_headers/nvme_intel.o 00:14:16.350 CXX test/cpp_headers/blobfs_bdev.o 00:14:16.350 CC examples/blob/cli/blobcli.o 00:14:16.350 CXX test/cpp_headers/vhost.o 00:14:16.608 CXX test/cpp_headers/fd.o 00:14:16.608 CXX test/cpp_headers/barrier.o 00:14:16.608 CC test/app/jsoncat/jsoncat.o 00:14:16.608 CC test/lvol/esnap/esnap.o 00:14:16.608 CXX test/cpp_headers/nvmf_fc_spec.o 00:14:16.608 CC test/event/reactor/reactor.o 00:14:16.608 CXX test/cpp_headers/zipf.o 00:14:16.608 LINK jsoncat 00:14:16.608 CC test/event/reactor_perf/reactor_perf.o 00:14:16.865 LINK blobcli 00:14:16.865 CXX test/cpp_headers/scheduler.o 00:14:16.865 LINK reactor 00:14:16.865 LINK reactor_perf 00:14:16.865 CC examples/ioat/perf/perf.o 00:14:16.865 CC test/nvme/aer/aer.o 00:14:16.865 CXX test/cpp_headers/dif.o 00:14:17.123 LINK ioat_perf 00:14:17.123 LINK aer 00:14:17.123 CXX test/cpp_headers/scsi_spec.o 00:14:17.123 CC test/app/stub/stub.o 00:14:17.380 CXX test/cpp_headers/blob.o 00:14:17.380 LINK stub 00:14:17.380 CC test/event/app_repeat/app_repeat.o 00:14:17.380 CXX test/cpp_headers/cpuset.o 00:14:17.380 CC test/event/scheduler/scheduler.o 00:14:17.646 CXX test/cpp_headers/thread.o 00:14:17.646 LINK app_repeat 00:14:17.646 CC examples/ioat/verify/verify.o 00:14:17.646 LINK scheduler 00:14:17.646 CXX test/cpp_headers/tree.o 00:14:17.646 CXX test/cpp_headers/xor.o 00:14:17.646 CXX test/cpp_headers/assert.o 00:14:17.922 CXX test/cpp_headers/file.o 00:14:17.922 LINK verify 00:14:17.922 CC test/nvme/reset/reset.o 00:14:17.922 CXX test/cpp_headers/endian.o 00:14:17.922 CXX test/cpp_headers/pci_ids.o 00:14:18.180 CXX test/cpp_headers/util.o 00:14:18.180 LINK reset 00:14:18.180 CXX test/cpp_headers/log.o 00:14:18.180 CXX test/cpp_headers/sock.o 00:14:18.180 CC test/rpc_client/rpc_client_test.o 00:14:18.180 CXX test/cpp_headers/nvme_ocssd_spec.o 00:14:18.180 CXX test/cpp_headers/config.o 00:14:18.439 CXX test/cpp_headers/histogram_data.o 00:14:18.439 CXX test/cpp_headers/nvmf_spec.o 00:14:18.439 CXX test/cpp_headers/idxd_spec.o 00:14:18.439 LINK rpc_client_test 00:14:18.439 CXX test/cpp_headers/crc16.o 00:14:18.439 CXX test/cpp_headers/bdev_zone.o 00:14:18.439 CXX test/cpp_headers/stdinc.o 00:14:18.439 CXX test/cpp_headers/scsi.o 00:14:18.439 CC examples/nvme/hello_world/hello_world.o 00:14:18.439 CXX test/cpp_headers/jsonrpc.o 00:14:18.697 CXX test/cpp_headers/blob_bdev.o 00:14:18.697 CXX 
test/cpp_headers/crc32.o 00:14:18.697 LINK hello_world 00:14:18.697 CC examples/sock/hello_world/hello_sock.o 00:14:18.697 CXX test/cpp_headers/nvmf_transport.o 00:14:18.697 CXX test/cpp_headers/vmd.o 00:14:18.697 CXX test/cpp_headers/uuid.o 00:14:18.955 CC examples/vmd/lsvmd/lsvmd.o 00:14:18.955 CXX test/cpp_headers/idxd.o 00:14:18.955 CXX test/cpp_headers/crc64.o 00:14:18.955 LINK hello_sock 00:14:18.955 CXX test/cpp_headers/nvme.o 00:14:18.955 CC test/nvme/sgl/sgl.o 00:14:18.955 LINK lsvmd 00:14:18.955 CC test/thread/poller_perf/poller_perf.o 00:14:18.955 CXX test/cpp_headers/iscsi_spec.o 00:14:18.955 CXX test/cpp_headers/queue.o 00:14:18.955 CXX test/cpp_headers/nvmf_cmd.o 00:14:19.214 CXX test/cpp_headers/lvol.o 00:14:19.214 LINK poller_perf 00:14:19.214 LINK sgl 00:14:19.214 CXX test/cpp_headers/ftl.o 00:14:19.214 CXX test/cpp_headers/trace.o 00:14:19.214 CXX test/cpp_headers/ioat_spec.o 00:14:19.471 CXX test/cpp_headers/conf.o 00:14:19.471 LINK esnap 00:14:19.471 CXX test/cpp_headers/ublk.o 00:14:19.471 CXX test/cpp_headers/bit_array.o 00:14:19.471 CC examples/nvme/reconnect/reconnect.o 00:14:19.471 CC test/thread/lock/spdk_lock.o 00:14:19.729 CXX test/cpp_headers/nvme_spec.o 00:14:19.729 CXX test/cpp_headers/string.o 00:14:19.729 CC examples/nvmf/nvmf/nvmf.o 00:14:19.729 CC examples/util/zipf/zipf.o 00:14:19.729 CXX test/cpp_headers/gpt_spec.o 00:14:19.729 LINK reconnect 00:14:19.986 CXX test/cpp_headers/nvme_ocssd.o 00:14:19.986 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:14:19.986 CC examples/vmd/led/led.o 00:14:19.986 LINK zipf 00:14:19.986 CXX test/cpp_headers/json.o 00:14:19.986 CC examples/thread/thread/thread_ex.o 00:14:19.986 LINK led 00:14:19.986 CC test/nvme/e2edp/nvme_dp.o 00:14:19.986 LINK nvmf 00:14:19.986 LINK histogram_ut 00:14:19.986 CXX test/cpp_headers/reduce.o 00:14:20.243 CC test/nvme/overhead/overhead.o 00:14:20.243 CXX test/cpp_headers/rpc.o 00:14:20.243 LINK thread 00:14:20.243 LINK nvme_dp 00:14:20.501 CXX test/cpp_headers/vfio_user_spec.o 00:14:20.501 CC test/unit/lib/accel/accel.c/accel_ut.o 00:14:20.501 LINK overhead 00:14:20.501 CC examples/idxd/perf/perf.o 00:14:20.501 CC examples/nvme/nvme_manage/nvme_manage.o 00:14:20.759 CC examples/interrupt_tgt/interrupt_tgt.o 00:14:20.759 LINK idxd_perf 00:14:20.759 LINK spdk_lock 00:14:20.759 CC examples/nvme/arbitration/arbitration.o 00:14:20.759 LINK interrupt_tgt 00:14:21.017 LINK nvme_manage 00:14:21.017 LINK arbitration 00:14:21.276 CC test/nvme/err_injection/err_injection.o 00:14:21.276 CC test/nvme/startup/startup.o 00:14:21.276 CC examples/nvme/hotplug/hotplug.o 00:14:21.276 CC examples/nvme/cmb_copy/cmb_copy.o 00:14:21.276 LINK err_injection 00:14:21.276 LINK startup 00:14:21.534 LINK cmb_copy 00:14:21.534 LINK hotplug 00:14:21.792 LINK accel_ut 00:14:21.792 CC examples/nvme/abort/abort.o 00:14:21.792 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:14:21.792 CC test/nvme/reserve/reserve.o 00:14:22.051 LINK pmr_persistence 00:14:22.051 LINK reserve 00:14:22.051 LINK abort 00:14:22.051 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:14:22.051 CC test/nvme/simple_copy/simple_copy.o 00:14:22.051 CC test/nvme/connect_stress/connect_stress.o 00:14:22.309 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:14:22.309 CC test/nvme/boot_partition/boot_partition.o 00:14:22.309 LINK connect_stress 00:14:22.309 LINK simple_copy 00:14:22.309 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:14:22.309 LINK boot_partition 00:14:22.309 CC test/unit/lib/dma/dma.c/dma_ut.o 00:14:22.567 LINK tree_ut 00:14:22.567 
LINK blob_bdev_ut 00:14:22.825 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:14:22.825 LINK dma_ut 00:14:22.825 CC test/nvme/compliance/nvme_compliance.o 00:14:22.825 CC test/unit/lib/event/app.c/app_ut.o 00:14:22.825 CC test/unit/lib/blob/blob.c/blob_ut.o 00:14:22.825 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:14:23.083 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:14:23.083 CC test/nvme/fused_ordering/fused_ordering.o 00:14:23.083 CC test/nvme/doorbell_aers/doorbell_aers.o 00:14:23.083 LINK nvme_compliance 00:14:23.341 LINK ioat_ut 00:14:23.341 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:14:23.341 LINK fused_ordering 00:14:23.341 LINK doorbell_aers 00:14:23.341 LINK app_ut 00:14:23.341 LINK blobfs_async_ut 00:14:23.599 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:14:23.599 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:14:23.599 LINK blobfs_sync_ut 00:14:23.857 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:14:23.857 LINK blobfs_bdev_ut 00:14:23.857 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:14:23.857 LINK conn_ut 00:14:24.115 CC test/nvme/fdp/fdp.o 00:14:24.115 LINK reactor_ut 00:14:24.115 CC test/nvme/cuse/cuse.o 00:14:24.115 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:14:24.115 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:14:24.115 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:14:24.373 LINK fdp 00:14:24.373 LINK json_util_ut 00:14:24.373 LINK jsonrpc_server_ut 00:14:24.373 CC test/unit/lib/log/log.c/log_ut.o 00:14:24.632 LINK init_grp_ut 00:14:24.632 LINK json_parse_ut 00:14:24.632 LINK log_ut 00:14:24.632 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:14:24.632 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:14:24.632 LINK json_write_ut 00:14:24.890 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:14:24.890 CC test/unit/lib/notify/notify.c/notify_ut.o 00:14:24.890 LINK cuse 00:14:24.890 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:14:25.147 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:14:25.147 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:14:25.147 LINK notify_ut 00:14:25.147 CC test/unit/lib/sock/sock.c/sock_ut.o 00:14:25.147 LINK bdev_ut 00:14:25.405 CC test/unit/lib/thread/thread.c/thread_ut.o 00:14:25.405 LINK dev_ut 00:14:25.405 CC test/unit/lib/bdev/part.c/part_ut.o 00:14:25.662 LINK lun_ut 00:14:25.662 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:14:25.662 LINK nvme_ut 00:14:25.662 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:14:25.662 LINK lvol_ut 00:14:25.920 LINK scsi_ut 00:14:25.920 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:14:25.920 LINK sock_ut 00:14:26.178 CC test/unit/lib/util/base64.c/base64_ut.o 00:14:26.178 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:14:26.178 LINK iscsi_ut 00:14:26.178 CC test/unit/lib/sock/posix.c/posix_ut.o 00:14:26.459 LINK base64_ut 00:14:26.459 LINK scsi_bdev_ut 00:14:26.738 CC test/unit/lib/iscsi/param.c/param_ut.o 00:14:26.738 LINK thread_ut 00:14:26.738 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:14:26.738 LINK scsi_pr_ut 00:14:26.738 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:14:26.996 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:14:26.996 LINK posix_ut 00:14:26.996 LINK bit_array_ut 00:14:26.996 LINK cpuset_ut 00:14:26.996 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:14:26.996 LINK blob_ut 00:14:26.996 LINK param_ut 00:14:27.255 LINK tcp_ut 00:14:27.255 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:14:27.255 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:14:27.255 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:14:27.255 
LINK pci_event_ut 00:14:27.255 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:14:27.255 LINK iobuf_ut 00:14:27.255 LINK crc16_ut 00:14:27.513 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:14:27.513 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:14:27.513 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:14:27.513 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:14:27.513 LINK crc32_ieee_ut 00:14:27.513 LINK subsystem_ut 00:14:27.513 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:14:27.771 LINK portal_grp_ut 00:14:27.771 LINK part_ut 00:14:27.771 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:14:27.771 LINK rpc_ut 00:14:27.771 LINK keyring_ut 00:14:28.029 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:14:28.029 LINK crc32c_ut 00:14:28.029 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:14:28.029 LINK idxd_user_ut 00:14:28.029 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:14:28.029 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:14:28.029 CC test/unit/lib/util/dif.c/dif_ut.o 00:14:28.029 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:14:28.029 LINK crc64_ut 00:14:28.029 CC test/unit/lib/rdma/common.c/common_ut.o 00:14:28.029 LINK nvme_ctrlr_ut 00:14:28.288 LINK rpc_ut 00:14:28.288 LINK scsi_nvme_ut 00:14:28.288 CC test/unit/lib/util/iov.c/iov_ut.o 00:14:28.288 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:14:28.545 LINK common_ut 00:14:28.545 LINK tgt_node_ut 00:14:28.545 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:14:28.545 LINK iov_ut 00:14:28.545 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:14:28.545 LINK idxd_ut 00:14:28.545 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:14:28.803 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:14:28.803 CC test/unit/lib/util/math.c/math_ut.o 00:14:28.803 LINK ftl_l2p_ut 00:14:28.803 LINK gpt_ut 00:14:28.803 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:14:28.803 LINK math_ut 00:14:28.803 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:14:29.061 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:14:29.061 LINK dif_ut 00:14:29.061 LINK ftl_io_ut 00:14:29.061 LINK ftl_bitmap_ut 00:14:29.061 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:14:29.061 LINK pipe_ut 00:14:29.320 CC test/unit/lib/util/string.c/string_ut.o 00:14:29.320 LINK nvme_ctrlr_cmd_ut 00:14:29.320 LINK vhost_ut 00:14:29.320 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:14:29.320 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:14:29.320 LINK ctrlr_ut 00:14:29.320 CC test/unit/lib/util/xor.c/xor_ut.o 00:14:29.320 LINK ftl_band_ut 00:14:29.577 LINK string_ut 00:14:29.578 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:14:29.578 LINK ftl_mempool_ut 00:14:29.578 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:14:29.578 LINK xor_ut 00:14:29.578 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:14:29.578 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:14:29.578 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:14:29.578 LINK vbdev_lvol_ut 00:14:29.835 LINK ftl_mngt_ut 00:14:29.835 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:14:29.835 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:14:29.835 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:14:30.092 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:14:30.092 LINK nvme_ctrlr_ocssd_cmd_ut 00:14:30.349 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:14:30.349 LINK ctrlr_bdev_ut 00:14:30.349 LINK nvme_ns_ut 00:14:30.349 LINK ftl_layout_upgrade_ut 00:14:30.349 LINK ftl_sb_ut 00:14:30.606 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:14:30.606 CC 
test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:14:30.606 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:14:30.606 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:14:30.606 LINK ctrlr_discovery_ut 00:14:30.864 LINK subsystem_ut 00:14:30.864 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:14:31.121 LINK auth_ut 00:14:31.121 LINK bdev_ut 00:14:31.121 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:14:31.121 LINK nvmf_ut 00:14:31.121 LINK nvme_ns_cmd_ut 00:14:31.121 LINK bdev_raid_ut 00:14:31.378 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:14:31.378 LINK nvme_ns_ocssd_cmd_ut 00:14:31.378 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:14:31.378 LINK bdev_zone_ut 00:14:31.378 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:14:31.378 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:14:31.378 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:14:31.635 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:14:31.635 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:14:31.635 LINK nvme_pcie_ut 00:14:31.892 LINK vbdev_zone_block_ut 00:14:31.892 LINK bdev_raid_sb_ut 00:14:31.892 LINK nvme_quirks_ut 00:14:31.892 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:14:31.892 LINK nvme_poll_group_ut 00:14:32.150 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:14:32.150 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:14:32.150 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:14:32.150 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:14:32.150 LINK nvme_qpair_ut 00:14:32.407 LINK rdma_ut 00:14:32.407 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:14:32.407 LINK transport_ut 00:14:32.407 LINK concat_ut 00:14:32.664 LINK nvme_io_msg_ut 00:14:32.664 LINK nvme_transport_ut 00:14:32.664 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:14:32.664 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:14:32.664 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:14:32.921 LINK nvme_fabric_ut 00:14:32.921 LINK nvme_opal_ut 00:14:32.921 LINK nvme_pcie_common_ut 00:14:33.178 LINK raid1_ut 00:14:33.178 LINK nvme_tcp_ut 00:14:33.743 LINK nvme_cuse_ut 00:14:34.001 LINK bdev_nvme_ut 00:14:34.001 LINK nvme_rdma_ut 00:14:34.258 ************************************ 00:14:34.258 END TEST unittest_build 00:14:34.258 ************************************ 00:14:34.258 00:14:34.258 real 1m4.402s 00:14:34.258 user 5m41.423s 00:14:34.258 sys 1m27.372s 00:14:34.258 11:07:52 unittest_build -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:14:34.258 11:07:52 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:14:34.258 11:07:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:14:34.258 11:07:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:14:34.258 11:07:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:14:34.258 11:07:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:34.258 11:07:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:14:34.258 11:07:52 -- pm/common@44 -- $ pid=2880 00:14:34.258 11:07:52 -- pm/common@50 -- $ kill -TERM 2880 00:14:34.258 11:07:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:34.258 11:07:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:14:34.258 11:07:52 -- pm/common@44 -- $ pid=2881 00:14:34.258 11:07:52 -- pm/common@50 -- $ kill -TERM 2881 00:14:34.258 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults 
entry "rlimit_core" 00:14:34.258 11:07:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.258 11:07:52 -- nvmf/common.sh@7 -- # uname -s 00:14:34.258 11:07:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.258 11:07:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.258 11:07:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.258 11:07:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.258 11:07:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.258 11:07:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.258 11:07:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.258 11:07:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.258 11:07:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.258 11:07:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.258 11:07:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71854b32-b001-4fee-b40d-cc51cc8503a4 00:14:34.258 11:07:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=71854b32-b001-4fee-b40d-cc51cc8503a4 00:14:34.258 11:07:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.258 11:07:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.258 11:07:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:34.258 11:07:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.258 11:07:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.258 11:07:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.258 11:07:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.258 11:07:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.258 11:07:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:14:34.258 11:07:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:14:34.258 11:07:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:14:34.258 11:07:52 -- paths/export.sh@5 -- # export PATH 00:14:34.259 11:07:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:14:34.259 11:07:52 -- nvmf/common.sh@47 -- # : 0 00:14:34.259 11:07:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.259 11:07:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.259 11:07:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.259 11:07:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.259 11:07:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.259 11:07:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.259 11:07:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.259 11:07:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 
00:14:34.259 11:07:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:14:34.259 11:07:52 -- spdk/autotest.sh@32 -- # uname -s 00:14:34.259 11:07:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:14:34.259 11:07:52 -- spdk/autotest.sh@33 -- # old_core_pattern=core 00:14:34.259 11:07:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:34.259 11:07:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:14:34.259 11:07:52 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:34.259 11:07:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:14:34.259 modprobe: FATAL: Module nbd not found. 00:14:34.259 11:07:52 -- spdk/autotest.sh@44 -- # true 00:14:34.259 11:07:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:14:34.259 11:07:52 -- spdk/autotest.sh@46 -- # udevadm=/sbin/udevadm 00:14:34.259 11:07:52 -- spdk/autotest.sh@48 -- # udevadm_pid=37064 00:14:34.259 11:07:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:14:34.259 11:07:52 -- pm/common@17 -- # local monitor 00:14:34.259 11:07:52 -- spdk/autotest.sh@47 -- # /sbin/udevadm monitor --property 00:14:34.259 11:07:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:34.259 11:07:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:34.259 11:07:52 -- pm/common@25 -- # sleep 1 00:14:34.259 11:07:52 -- pm/common@21 -- # date +%s 00:14:34.259 11:07:52 -- pm/common@21 -- # date +%s 00:14:34.259 11:07:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715771272 00:14:34.259 11:07:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715771272 00:14:34.259 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715771272_collect-vmstat.pm.log 00:14:34.259 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715771272_collect-cpu-load.pm.log 00:14:35.628 11:07:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:14:35.628 11:07:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:14:35.628 11:07:53 -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:35.628 11:07:53 -- common/autotest_common.sh@10 -- # set +x 00:14:35.628 11:07:53 -- spdk/autotest.sh@59 -- # create_test_list 00:14:35.628 11:07:53 -- common/autotest_common.sh@744 -- # xtrace_disable 00:14:35.628 11:07:53 -- common/autotest_common.sh@10 -- # set +x 00:14:35.628 11:07:53 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:14:35.628 11:07:53 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:14:35.628 11:07:53 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:14:35.628 11:07:53 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:14:35.628 11:07:53 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:14:35.628 11:07:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:14:35.628 11:07:53 -- common/autotest_common.sh@1451 -- # uname 00:14:35.628 11:07:53 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:14:35.628 11:07:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:14:35.628 11:07:53 -- common/autotest_common.sh@1471 -- # uname 00:14:35.628 11:07:53 -- common/autotest_common.sh@1471 -- 
# [[ Linux = FreeBSD ]] 00:14:35.628 11:07:53 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:14:35.628 11:07:53 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:14:35.628 11:07:53 -- spdk/autotest.sh@72 -- # hash lcov 00:14:35.628 11:07:53 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:14:35.628 11:07:53 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:14:35.628 --rc lcov_branch_coverage=1 00:14:35.628 --rc lcov_function_coverage=1 00:14:35.628 --rc genhtml_branch_coverage=1 00:14:35.628 --rc genhtml_function_coverage=1 00:14:35.628 --rc genhtml_legend=1 00:14:35.628 --rc geninfo_all_blocks=1 00:14:35.628 ' 00:14:35.628 11:07:53 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:14:35.628 --rc lcov_branch_coverage=1 00:14:35.628 --rc lcov_function_coverage=1 00:14:35.628 --rc genhtml_branch_coverage=1 00:14:35.628 --rc genhtml_function_coverage=1 00:14:35.628 --rc genhtml_legend=1 00:14:35.628 --rc geninfo_all_blocks=1 00:14:35.628 ' 00:14:35.628 11:07:53 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:14:35.628 --rc lcov_branch_coverage=1 00:14:35.628 --rc lcov_function_coverage=1 00:14:35.628 --rc genhtml_branch_coverage=1 00:14:35.628 --rc genhtml_function_coverage=1 00:14:35.628 --rc genhtml_legend=1 00:14:35.628 --rc geninfo_all_blocks=1 00:14:35.628 --no-external' 00:14:35.628 11:07:53 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:14:35.628 --rc lcov_branch_coverage=1 00:14:35.628 --rc lcov_function_coverage=1 00:14:35.628 --rc genhtml_branch_coverage=1 00:14:35.628 --rc genhtml_function_coverage=1 00:14:35.628 --rc genhtml_legend=1 00:14:35.628 --rc geninfo_all_blocks=1 00:14:35.628 --no-external' 00:14:35.628 11:07:53 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:14:35.628 lcov: LCOV version 1.15 00:14:35.628 11:07:53 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:14:43.827 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:14:43.827 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:14:43.827 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:14:43.827 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:14:43.827 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:14:43.827 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no 
functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:15:05.751 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:15:05.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no 
functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:15:05.752 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:15:05.752 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:15:05.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:15:52.440 11:09:05 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:15:52.440 11:09:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:52.440 11:09:05 -- common/autotest_common.sh@10 -- # set +x 00:15:52.440 11:09:05 -- spdk/autotest.sh@91 -- # rm -f 00:15:52.440 11:09:05 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:52.440 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:15:52.440 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:52.440 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.440 11:09:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:15:52.440 11:09:05 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:15:52.440 11:09:05 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:15:52.440 11:09:05 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:15:52.440 11:09:05 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:52.440 11:09:05 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:15:52.440 11:09:05 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:15:52.440 11:09:05 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:52.440 11:09:05 -- common/autotest_common.sh@1660 -- # return 1 00:15:52.440 11:09:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:15:52.440 11:09:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:15:52.440 11:09:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:15:52.440 11:09:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:15:52.440 11:09:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:15:52.440 11:09:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:52.440 No valid GPT data, bailing 00:15:52.440 11:09:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:52.440 11:09:05 -- scripts/common.sh@391 -- # pt= 00:15:52.440 11:09:05 -- scripts/common.sh@392 -- # return 1 00:15:52.440 11:09:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:15:52.440 1+0 records in 00:15:52.440 1+0 records out 00:15:52.440 1048576 bytes (1.0 MB) copied, 0.00281362 s, 373 MB/s 00:15:52.440 11:09:05 -- spdk/autotest.sh@118 -- # sync 00:15:52.440 11:09:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:15:52.440 11:09:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:15:52.440 11:09:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:15:52.440 11:09:06 -- spdk/autotest.sh@124 -- # uname -s 00:15:52.440 11:09:06 -- 
spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:15:52.440 11:09:06 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:15:52.440 11:09:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.440 11:09:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.440 11:09:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.440 ************************************ 00:15:52.440 START TEST setup.sh 00:15:52.440 ************************************ 00:15:52.440 11:09:06 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:15:52.440 * Looking for test storage... 00:15:52.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:52.440 11:09:06 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:15:52.440 11:09:06 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:15:52.440 11:09:06 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:15:52.440 11:09:06 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.440 11:09:06 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.440 11:09:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:15:52.440 ************************************ 00:15:52.440 START TEST acl 00:15:52.440 ************************************ 00:15:52.440 11:09:06 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:15:52.440 * Looking for test storage... 00:15:52.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:15:52.440 11:09:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:15:52.440 11:09:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:15:52.440 11:09:07 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:15:52.440 11:09:07 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:52.440 11:09:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:15:52.440 11:09:07 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:15:52.440 11:09:07 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:52.440 11:09:07 setup.sh.acl -- common/autotest_common.sh@1660 -- # return 1 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:15:52.440 11:09:07 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:52.440 11:09:07 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:52.440 11:09:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:15:52.440 11:09:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:15:52.441 11:09:07 setup.sh.acl -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:52.441 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:52.441 Hugepages 00:15:52.441 node hugesize free / total 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:52.441 00:15:52.441 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:52.441 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:15:52.441 11:09:07 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:15:52.441 11:09:07 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.441 11:09:07 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.441 11:09:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:15:52.441 ************************************ 00:15:52.441 START TEST denied 00:15:52.441 ************************************ 00:15:52.441 11:09:07 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:15:52.441 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.441 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.441 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:15:52.441 
11:09:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:52.441 11:09:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:52.441 ************************************ 00:15:52.441 END TEST denied 00:15:52.441 ************************************ 00:15:52.441 00:15:52.441 real 0m0.552s 00:15:52.441 user 0m0.296s 00:15:52.441 sys 0m0.297s 00:15:52.441 11:09:08 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.441 11:09:08 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:15:52.441 11:09:08 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:15:52.441 11:09:08 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.441 11:09:08 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.441 11:09:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:15:52.441 ************************************ 00:15:52.441 START TEST allowed 00:15:52.441 ************************************ 00:15:52.441 11:09:08 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:52.441 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.441 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.441 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:52.441 11:09:08 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:52.441 ************************************ 00:15:52.441 END TEST allowed 00:15:52.441 ************************************ 00:15:52.441 00:15:52.441 real 0m0.660s 00:15:52.441 user 0m0.218s 00:15:52.441 sys 0m0.429s 00:15:52.441 11:09:08 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.441 11:09:08 
setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:15:52.441 00:15:52.441 real 0m1.913s 00:15:52.441 user 0m0.833s 00:15:52.441 sys 0m1.139s 00:15:52.441 11:09:08 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.441 ************************************ 00:15:52.441 END TEST acl 00:15:52.441 ************************************ 00:15:52.441 11:09:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:15:52.441 11:09:08 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:15:52.441 11:09:08 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.441 11:09:08 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.441 11:09:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:15:52.441 ************************************ 00:15:52.441 START TEST hugepages 00:15:52.441 ************************************ 00:15:52.441 11:09:08 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:15:52.441 * Looking for test storage... 00:15:52.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 4769308 kB' 'MemAvailable: 7431732 kB' 'Buffers: 2068 kB' 'Cached: 2853632 kB' 'SwapCached: 0 kB' 'Active: 2214260 kB' 'Inactive: 730724 kB' 'Active(anon): 89492 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124768 kB' 'Inactive(file): 714040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 89304 kB' 'Mapped: 25244 kB' 'Shmem: 16892 kB' 'Slab: 171104 kB' 'SReclaimable: 122468 kB' 'SUnreclaim: 48636 kB' 'KernelStack: 3744 kB' 'PageTables: 8208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4053424 kB' 'Committed_AS: 343404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38768 kB' 
'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 
00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:15:52.441 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:15:52.441 11:09:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.441 11:09:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.441 11:09:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:15:52.441 ************************************ 00:15:52.441 START TEST default_setup 00:15:52.441 ************************************ 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:15:52.441 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:52.442 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:15:52.442 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:52.442 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No 
such file or directory 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864680 kB' 'MemAvailable: 9527364 kB' 'Buffers: 2068 kB' 'Cached: 2853632 kB' 'SwapCached: 0 kB' 'Active: 2222528 kB' 'Inactive: 730984 kB' 'Active(anon): 97760 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124768 kB' 'Inactive(file): 714300 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96780 kB' 'Mapped: 25244 kB' 'Shmem: 16892 kB' 'Slab: 171104 kB' 'SReclaimable: 122468 kB' 'SUnreclaim: 48636 kB' 'KernelStack: 3744 kB' 'PageTables: 8112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 
11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 8192 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=8192 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865200 kB' 'MemAvailable: 9527884 kB' 'Buffers: 2068 kB' 'Cached: 2853632 kB' 'SwapCached: 0 kB' 'Active: 2222528 kB' 'Inactive: 730984 kB' 'Active(anon): 97760 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124768 kB' 'Inactive(file): 714300 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96392 kB' 'Mapped: 25244 kB' 'Shmem: 16892 kB' 'Slab: 171104 kB' 'SReclaimable: 122468 kB' 'SUnreclaim: 48636 kB' 'KernelStack: 3744 kB' 'PageTables: 8500 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.442 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 
11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865460 kB' 'MemAvailable: 9528144 kB' 'Buffers: 2068 kB' 'Cached: 2853632 kB' 'SwapCached: 0 kB' 'Active: 2222268 kB' 'Inactive: 730984 kB' 'Active(anon): 97500 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124768 kB' 'Inactive(file): 714300 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96392 kB' 'Mapped: 25244 kB' 'Shmem: 16892 kB' 'Slab: 171104 kB' 'SReclaimable: 122468 kB' 'SUnreclaim: 48636 kB' 'KernelStack: 3744 kB' 'PageTables: 8500 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:15:52.443 nr_hugepages=1024 00:15:52.443 resv_hugepages=0 00:15:52.443 surplus_hugepages=0 00:15:52.443 anon_hugepages=8192 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
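The trace above finishes the HugePages_Rsvd lookup (echo 0) and then checks the hugepage accounting: with nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=8192 read back from the kernel, hugepages.sh verifies (( HugePages_Total == nr_hugepages + surp + resv )), i.e. 1024 == 1024 + 0 + 0. Every one of those lookups goes through the same get_meminfo helper in setup/common.sh, whose behaviour the xtrace makes visible: load /proc/meminfo (or a per-node meminfo file) into an array, strip any "Node <n>" prefix, then scan key/value pairs until the requested field matches and echo its value. Below is a minimal sketch of that pattern, assuming the helper behaves as the trace suggests; the function name get_meminfo_sketch and the return-1 fallback are illustrative, not the actual implementation.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <n>"

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local mem line var val _

        # Per-node lookups read the node's own meminfo file instead of /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip it, as the
        # trace shows with mem=("${mem[@]#Node +([0-9]) }").
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Keys that do not match the requested field are skipped (the long runs
            # of "continue" in the trace); the first match echoes its value.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1   # assumed fallback: requested key not present
    }

Called as get_meminfo_sketch HugePages_Total for the system-wide count, or get_meminfo_sketch HugePages_Surp 0 to read node 0's meminfo, mirroring the get_meminfo calls that follow in the trace.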
00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865212 kB' 'MemAvailable: 9527896 kB' 'Buffers: 2068 kB' 'Cached: 2853632 kB' 'SwapCached: 0 kB' 'Active: 2222268 kB' 'Inactive: 730984 kB' 'Active(anon): 97500 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124768 kB' 'Inactive(file): 714300 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96392 kB' 'Mapped: 25244 kB' 'Shmem: 16892 kB' 'Slab: 171104 kB' 'SReclaimable: 122468 kB' 'SUnreclaim: 48636 kB' 'KernelStack: 3744 kB' 'PageTables: 8500 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.443 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 
11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv 
)) 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865408 kB' 'MemUsed: 5435744 kB' 'Active: 2222072 kB' 'Inactive: 730984 kB' 'Active(anon): 97304 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124768 kB' 'Inactive(file): 714300 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2855700 kB' 'Mapped: 25244 kB' 'AnonPages: 96684 kB' 'Shmem: 16892 kB' 'KernelStack: 3744 kB' 'PageTables: 8500 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171104 kB' 'SReclaimable: 122468 kB' 'SUnreclaim: 48636 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.444 11:09:09 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:15:52.444 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:52.445 node0=1024 expecting 1024 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:52.445 ************************************ 00:15:52.445 END TEST default_setup 00:15:52.445 ************************************ 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:52.445 00:15:52.445 real 0m0.433s 00:15:52.445 user 0m0.188s 00:15:52.445 sys 0m0.227s 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.445 11:09:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:15:52.445 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:15:52.445 11:09:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.445 11:09:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.445 11:09:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:15:52.445 ************************************ 00:15:52.445 START TEST per_node_1G_alloc 00:15:52.445 ************************************ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=("$@") 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:52.445 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:15:52.445 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:52.445 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.445 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7914176 kB' 'MemAvailable: 10576720 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2220776 kB' 'Inactive: 730792 kB' 'Active(anon): 95968 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95772 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 7620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 8192 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=8192 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- 
# get_meminfo HugePages_Surp 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7914176 kB' 'MemAvailable: 10576720 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2221036 kB' 'Inactive: 730792 kB' 'Active(anon): 96228 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96160 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 7620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.445 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- 
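The HugePages_Surp pass ends the same way: no earlier field matches, common.sh@33 echoes 0, and hugepages.sh@99 records surp=0 before moving on to HugePages_Rsvd. The odd-looking right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are only an xtrace artifact: the script compares against a quoted expansion, and bash -x prints a quoted word inside [[ ]] with every character backslash-escaped to show it is matched literally rather than as a pattern. A two-line illustration (the variable name is arbitrary):

    get=HugePages_Surp
    ( set -x; [[ HugePages_Total == "$get" ]] )   # traced as: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]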
setup/common.sh@20 -- # local mem_f mem 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7914176 kB' 'MemAvailable: 10576720 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2221036 kB' 'Inactive: 730792 kB' 'Active(anon): 96228 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95772 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 7620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 
11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.446 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.447 nr_hugepages=512 00:15:52.447 resv_hugepages=0 00:15:52.447 surplus_hugepages=0 00:15:52.447 anon_hugepages=8192 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- 
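By this point the three lookups have produced anon=8192, surp=0 and resv=0, and hugepages.sh@102-105 prints the summary visible above (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=8192). The consistency checks at @107 and @109 therefore reduce to 512 == 512 + 0 + 0, and with Hugepagesize: 2048 kB those 512 pages come to 512 × 2048 kB = 1,048,576 kB = 1 GiB, consistent with the "1G" in the per_node_1G_alloc test name. The same arithmetic spelled out (variable names illustrative):

    nr_hugepages=512 surp=0 resv=0 hugepagesize_kb=2048
    (( 512 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
    echo "$(( nr_hugepages * hugepagesize_kb / 1024 / 1024 )) GiB of 2 MiB hugepages"   # -> 1 GiB

The trace then re-reads HugePages_Total from /proc/meminfo to confirm the kernel actually allocated all 512 pages.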
setup/common.sh@20 -- # local mem_f mem 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7914132 kB' 'MemAvailable: 10576676 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2220776 kB' 'Inactive: 730792 kB' 'Active(anon): 95968 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95772 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 7620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.447 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7914392 kB' 'MemUsed: 4386760 kB' 'Active: 2220776 kB' 'Inactive: 730792 kB' 'Active(anon): 95968 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2855808 kB' 'Mapped: 25232 kB' 'AnonPages: 95772 kB' 'Shmem: 16892 kB' 'KernelStack: 3728 kB' 'PageTables: 7620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
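The long [[ ... ]] / continue runs here are the body of setup/common.sh's get_meminfo helper, called as get_meminfo HugePages_Surp 0: it loads node 0's meminfo file and walks every key until the requested one matches. A condensed sketch of that helper, reconstructed from this xtrace rather than copied from setup/common.sh (the exact signature and error handling are assumptions):

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern that strips the node prefix

    # Sketch: print one field from /proc/meminfo, or from a NUMA node's
    # meminfo when a node number is given. Per-node files prefix each line
    # with "Node <n> ", which the array expansion below strips first.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated compare/continue seen in the trace
            echo "$val"                        # value only, e.g. 512 for node 0's HugePages_Total
            return 0
        done
        return 1
    }

Called as in the trace, get_meminfo HugePages_Surp 0 prints node 0's surplus-page count, which is 0 in this run.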
00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:15:52.448 node0=512 expecting 512 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:15:52.448 00:15:52.448 real 0m0.257s 00:15:52.448 user 0m0.135s 00:15:52.448 sys 0m0.147s 00:15:52.448 11:09:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.448 11:09:09 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:15:52.448 ************************************
00:15:52.448 END TEST per_node_1G_alloc
00:15:52.448 ************************************
00:15:52.448 11:09:09 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:15:52.448 11:09:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:15:52.448 11:09:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:15:52.448 11:09:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:15:52.448 ************************************
00:15:52.448 START TEST even_2G_alloc
00:15:52.448 ************************************
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:15:52.448 11:09:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:52.448 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory
00:15:52.448 0000:00:03.0 (1af4 1001): Active
devices: mount@vda:vda1, so not binding PCI dev 00:15:52.448 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:52.448 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.448 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864140 kB' 'MemAvailable: 9526684 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2221688 kB' 'Inactive: 730792 kB' 'Active(anon): 96880 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96840 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 8688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
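This second long scan is verify_nr_hugepages taking an AnonHugePages baseline. At hugepages.sh@96 the trace tests the string "[always] madvise never" against *\[\n\e\v\e\r\]*, i.e. transparent huge pages are not disabled, so the helper records how much anonymous memory THP is already backing before checking the explicit pool. A minimal sketch of that guard, assuming the string comes from the standard sysfs file (the trace does not show where it was read):

    # Sketch of the guard at hugepages.sh@96-97: take an AnonHugePages
    # baseline only when transparent huge pages are not fully disabled.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "[always] madvise never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # resolves to 8192 (kB) further down this trace
    fi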
00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 8192 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=8192 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864400 kB' 'MemAvailable: 9526944 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2221948 kB' 'Inactive: 730792 kB' 'Active(anon): 97140 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96840 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 8688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.449 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
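With anon=8192 and surp=0 recorded, the verifier now fetches HugePages_Rsvd the same way. The three values feed the accounting check that appeared in the per-node test above as (( 512 == nr_hugepages + surp + resv )); for this test, with nr_hugepages=1024 from get_test_nr_hugepages 2097152, it amounts to the sketch below (variable names follow the trace; the error message is illustrative):

    # Sketch of the verifier's arithmetic: the pool reported by /proc/meminfo
    # must equal the requested pages plus any surplus and reserved pages.
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # fetched in the trace that follows
    total=$(get_meminfo HugePages_Total)   # 1024 after NRHUGE=1024 HUGE_EVEN_ALLOC=yes
    (( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total" >&2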
00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864056 kB' 'MemAvailable: 9526600 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2221948 kB' 'Inactive: 730792 kB' 'Active(anon): 97140 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96840 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 8688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
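Note that node= is empty in the calls above, which is why the existence test probes /sys/devices/system/node/node/meminfo (the node number is simply missing from the path); the test fails and the system-wide /proc/meminfo snapshot printed above is scanned instead. When a node index is supplied, as happens further down with node=0, the per-node file is used and each line's leading "Node N " prefix is stripped before the same scan runs. A rough sketch of that source selection, with an illustrative wrapper name rather than the exact setup/common.sh code:

shopt -s extglob     # needed for the +([0-9]) pattern below

select_meminfo_source() {
    local node=${1:-}          # empty for the system-wide query
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines look like "Node 0 MemTotal: ...", so drop the prefix.
    mem=("${mem[@]#Node +([0-9]) }")
}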
00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.450 nr_hugepages=1024 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:52.450 resv_hugepages=0 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:52.450 surplus_hugepages=0 00:15:52.450 anon_hugepages=8192 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:15:52.450 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864316 kB' 'MemAvailable: 9526860 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2221688 kB' 'Inactive: 730792 kB' 'Active(anon): 96880 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96840 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 8688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 
11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.451 11:09:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864288 kB' 'MemUsed: 5436864 kB' 'Active: 2221688 kB' 'Inactive: 730792 kB' 'Active(anon): 96880 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2855808 kB' 'Mapped: 25232 kB' 'AnonPages: 96840 kB' 'Shmem: 16892 kB' 'KernelStack: 3728 kB' 'PageTables: 8688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
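Taken together, the checks traced above boil down to simple bookkeeping: the kernel-reported HugePages_Total must equal the requested page count plus any surplus and reserved pages, and the same counters are then re-read per NUMA node from /sys/devices/system/node/node0/meminfo. With this run's values plugged in (a worked restatement, not the hugepages.sh code itself):

nr_hugepages=1024   # requested by the even_2G_alloc test
surp=0              # HugePages_Surp from /proc/meminfo
resv=0              # HugePages_Rsvd from /proc/meminfo
total=1024          # HugePages_Total from /proc/meminfo

(( total == nr_hugepages + surp + resv ))   # holds: 1024 == 1024 + 0 + 0
(( total == nr_hugepages ))                 # holds: no surplus or reserved pages

With a single memory node (no_nodes=1), all 1024 pages are expected on node0, which is what the node0 query being traced here verifies.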
00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.451 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
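As the lines that follow show, the node0 scan reaches HugePages_Surp and returns 0, so none of node0's 1024 pages are surplus and the test can assert the expected per-node count in the echo below. Outside the test harness the same per-node counters can be read directly; a quick example using standard kernel sysfs paths, with the values reported in this run:

cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# 1024
grep HugePages_ /sys/devices/system/node/node0/meminfo
# Node 0 HugePages_Total:  1024
# Node 0 HugePages_Free:   1024
# Node 0 HugePages_Surp:      0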
00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:52.452 node0=1024 expecting 1024 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:52.452 00:15:52.452 real 0m0.245s 00:15:52.452 user 0m0.113s 00:15:52.452 sys 0m0.157s 00:15:52.452 ************************************ 00:15:52.452 END TEST even_2G_alloc 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.452 11:09:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:15:52.452 ************************************ 00:15:52.452 11:09:10 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:15:52.452 11:09:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.452 11:09:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.452 11:09:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:15:52.452 ************************************ 00:15:52.452 START TEST odd_alloc 00:15:52.452 ************************************ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:15:52.452 
11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:52.452 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:15:52.452 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:52.452 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.452 11:09:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6862804 kB' 'MemAvailable: 9525348 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2222532 kB' 'Inactive: 730792 kB' 'Active(anon): 97724 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96160 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 8396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100976 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 8192 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=8192 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.452 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6863064 kB' 'MemAvailable: 9525608 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2222532 kB' 'Inactive: 730792 kB' 'Active(anon): 97724 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 
'Writeback: 0 kB' 'AnonPages: 96160 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 8396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100976 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # 
surp=0 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6863324 kB' 'MemAvailable: 9525868 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2222532 kB' 'Inactive: 730792 kB' 'Active(anon): 97724 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95772 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 8008 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100976 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
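
Each get_meminfo pass dumps the whole snapshot, but only a handful of fields matter to the verification. Across the snapshots in this excerpt those fields are identical (HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB); only transient fields such as AnonPages and PageTables drift between dumps. The same counters can be pulled by hand with:

grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
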
00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.453 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:15:52.454 nr_hugepages=1025 00:15:52.454 resv_hugepages=0 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:52.454 surplus_hugepages=0 00:15:52.454 anon_hugepages=8192 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6863220 kB' 'MemAvailable: 9525764 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2222272 kB' 'Inactive: 730792 kB' 'Active(anon): 97464 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 
kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96548 kB' 'Mapped: 25232 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 7620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100976 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
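The backslash-riddled right-hand sides in these comparisons (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and the like) are not corruption: when the pattern operand of a [[ ... == ... ]] test is quoted, bash's xtrace re-prints it with every character escaped to show the match is literal rather than a glob. A minimal standalone illustration (hypothetical snippet, not taken from setup/common.sh):

    set -x
    var=HugePages_Total
    [[ $var == "HugePages_Total" ]]
    # xtrace renders this as: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]

So each iteration logged here is simply comparing the current /proc/meminfo key against the literal string HugePages_Total and continuing until it matches.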
00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.454 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 
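The scan above ended when HugePages_Total matched: get_meminfo echoed 1025, verify_nr_hugepages confirmed 1025 == nr_hugepages + surp + resv, get_nodes found a single NUMA node, and the get_meminfo HugePages_Surp 0 call that follows repeats the same scan against /sys/devices/system/node/node0/meminfo. Pieced together from the traced commands, the helper behaves roughly like the sketch below (a reconstruction for readability, not the verbatim setup/common.sh source; extglob is assumed enabled, since the Node-prefix strip requires it):

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # With a node argument, prefer that node's meminfo; with none, the
        # node$node path below does not exist and /proc/meminfo is kept,
        # matching the failed [[ -e .../node/meminfo ]] test in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem line
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix lines with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
    }

For the state dumped above this yields get_meminfo HugePages_Total → 1025 and get_meminfo HugePages_Surp 0 → 0.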
00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6863128 kB' 'MemUsed: 5438024 kB' 'Active: 2222532 kB' 'Inactive: 730792 kB' 'Active(anon): 97724 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2855808 kB' 'Mapped: 25232 kB' 'AnonPages: 96160 kB' 'Shmem: 16892 kB' 'KernelStack: 3728 kB' 'PageTables: 7620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 
11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:15:52.455 node0=1025 expecting 1025 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:15:52.455 00:15:52.455 real 0m0.246s 00:15:52.455 user 0m0.120s 00:15:52.455 sys 0m0.148s 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.455 
************************************ 00:15:52.455 END TEST odd_alloc 00:15:52.455 ************************************ 00:15:52.455 11:09:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:15:52.455 11:09:10 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:15:52.455 11:09:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.455 11:09:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.455 11:09:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:15:52.455 ************************************ 00:15:52.455 START TEST custom_alloc 00:15:52.455 ************************************ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:52.455 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:15:52.455 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:52.455 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7912880 kB' 'MemAvailable: 10575424 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2222404 kB' 'Inactive: 730792 kB' 'Active(anon): 97596 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96840 kB' 'Mapped: 25620 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 7620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.455 11:09:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.455 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 8192 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=8192 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
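verify_nr_hugepages queries one key at a time: AnonHugePages was just fetched (8192 kB, consistent with the "[always] madvise never" THP setting tested before it), and the get_meminfo HugePages_Surp call starting here rescans the whole file from MemTotal down, as will the reserved-page lookup after it. Under xtrace every one of those comparisons is logged, which is why these sections run so long. The repeated scans are functionally harmless; if they mattered, a single pass into an associative array would yield the same values (hypothetical alternative, not what setup/common.sh does):

    declare -A meminfo
    while IFS=': ' read -r var val _; do
        meminfo[$var]=$val
    done </proc/meminfo
    anon=${meminfo[AnonHugePages]:-0}    # 8192 in this run
    surp=${meminfo[HugePages_Surp]:-0}   # 0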
00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7912880 kB' 'MemAvailable: 10575424 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2222404 kB' 'Inactive: 730792 kB' 'Active(anon): 97596 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 97132 kB' 'Mapped: 25620 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 7620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 
11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.456 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7912680 kB' 'MemAvailable: 10575224 kB' 'Buffers: 2068 kB' 'Cached: 2853740 kB' 'SwapCached: 0 kB' 'Active: 2222208 kB' 'Inactive: 730792 kB' 'Active(anon): 97400 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124808 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96840 kB' 'Mapped: 25620 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3728 kB' 'PageTables: 
7328 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 
11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:15:52.457 nr_hugepages=512 00:15:52.457 resv_hugepages=0 00:15:52.457 surplus_hugepages=0 
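For readers following the xtrace above: the long runs of IFS=': ' / read -r / continue entries are the get_meminfo helper from setup/common.sh walking a meminfo snapshot one field at a time until it reaches the requested key (first HugePages_Surp, then HugePages_Rsvd, both 0 in this run). The block below is a minimal bash reconstruction of that behaviour as it appears in this trace; it is a sketch inferred from the log, not the authoritative SPDK implementation, and details such as option handling and error paths are assumptions.

shopt -s extglob                            # the "Node +([0-9]) " strip below needs extglob

get_meminfo() {
    local get=$1                            # meminfo field to look up, e.g. HugePages_Surp
    local node=$2                           # optional NUMA node number
    local var val _
    local mem_f mem

    mem_f=/proc/meminfo
    # Prefer the per-node meminfo file when a node was requested and it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # not the field we were asked for
        echo "$val"                         # numeric value only; the "kB" unit lands in $_
        return 0
    done
    return 1                                # field not present in this snapshot
}

Usage as seen in the trace: surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd).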
00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:52.457 anon_hugepages=8192 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7912896 kB' 'MemAvailable: 10575432 kB' 'Buffers: 2068 kB' 'Cached: 2853732 kB' 'SwapCached: 0 kB' 'Active: 2221732 kB' 'Inactive: 730788 kB' 'Active(anon): 96928 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124804 kB' 'Inactive(file): 714104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96368 kB' 'Mapped: 25564 kB' 'Shmem: 16892 kB' 'Slab: 171388 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48892 kB' 'KernelStack: 3680 kB' 'PageTables: 7656 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.457 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:52.458 11:09:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7913316 kB' 'MemUsed: 4387836 kB' 'Active: 2221548 kB' 'Inactive: 730788 kB' 'Active(anon): 96744 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124804 kB' 'Inactive(file): 714104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2855800 kB' 'Mapped: 25180 kB' 'AnonPages: 96164 kB' 'Shmem: 16892 kB' 'KernelStack: 3584 kB' 'PageTables: 7776 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171404 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48908 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
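Past the meminfo snapshots, the hugepages.sh entries in the surrounding trace (@107 through @130) are the custom_alloc verification itself: confirm that the system-wide HugePages_Total equals the requested count plus surplus and reserved pages, then repeat the check against node0's own meminfo. The sketch below condenses that flow, reusing the get_meminfo reconstruction above and simplified to the single-node case seen in this run; variable names follow the trace, but the real setup/hugepages.sh (sorted_t/sorted_s bookkeeping, multi-node handling) is richer, so treat this as an illustration only.

# Assumes the get_meminfo sketch above and "shopt -s extglob".
nr_hugepages=512

surp=$(get_meminfo HugePages_Surp)      # hugepages.sh@99  -> 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # hugepages.sh@100 -> 0 in this run

# System-wide check: the kernel allocated exactly what was requested,
# with no surplus or reserved pages unaccounted for.
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

# Per-node expectation; this VM exposes a single NUMA node, so node0
# is expected to hold all 512 pages.
nodes_test=()
nodes_test[0]=$nr_hugepages

# Per-node check: fold reserved and per-node surplus pages into the
# expectation, then compare it against the node's own meminfo.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    echo "node$node=${nodes_test[node]} expecting $(get_meminfo HugePages_Total "$node")"
done

With the values logged here this prints "node0=512 expecting 512", matching the final comparison in the trace.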
00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:15:52.458 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:52.459 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:52.459 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:52.459 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:52.459 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:15:52.459 node0=512 expecting 512 00:15:52.459 11:09:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:15:52.459 00:15:52.459 real 0m0.322s 00:15:52.459 user 0m0.142s 00:15:52.459 sys 0m0.205s 00:15:52.459 11:09:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.459 11:09:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:15:52.459 ************************************ 00:15:52.459 END TEST custom_alloc 00:15:52.459 ************************************ 00:15:52.459 11:09:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:15:52.459 11:09:10 setup.sh.hugepages -- common/autotest_common.sh@1097 
-- # '[' 2 -le 1 ']' 00:15:52.459 11:09:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.459 11:09:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:15:52.459 ************************************ 00:15:52.459 START TEST no_shrink_alloc 00:15:52.459 ************************************ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:52.459 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:15:52.459 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:52.459 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864160 kB' 'MemAvailable: 9526708 kB' 'Buffers: 2068 kB' 'Cached: 2853744 kB' 'SwapCached: 0 kB' 'Active: 2222196 kB' 'Inactive: 730792 kB' 'Active(anon): 97384 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96488 kB' 'Mapped: 24876 kB' 'Shmem: 16892 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'KernelStack: 3680 kB' 'PageTables: 8156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 8192 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=8192 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864420 kB' 'MemAvailable: 9526968 kB' 'Buffers: 2068 kB' 'Cached: 2853744 kB' 'SwapCached: 0 kB' 'Active: 2221936 kB' 'Inactive: 730792 kB' 'Active(anon): 97124 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96100 kB' 'Mapped: 24876 kB' 'Shmem: 16892 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'KernelStack: 3680 kB' 'PageTables: 8156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
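At this point verify_nr_hugepages has read AnonHugePages (anon=8192 kB) and moves on to HugePages_Surp, which the loop that follows resolves to surp=0, before querying HugePages_Rsvd. Roughly, the check being assembled compares the configured pool against what the kernel reports; an illustrative, self-contained version of that kind of comparison (assumed logic, not the actual hugepages.sh code) would be:

  # Illustrative only: confirm the kernel actually allocated the pool this test configured.
  expected=1024                                               # nr_hugepages set by no_shrink_alloc
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

  if (( total - surp == expected )); then
      echo "hugepage pool OK: total=$total surp=$surp resv=$resv"
  else
      echo "unexpected hugepage pool: total=$total surp=$surp expected=$expected" >&2
  fi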
00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:15:52.459 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.459 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.459 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.459 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.459 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 
11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- 
# get_meminfo HugePages_Rsvd 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864680 kB' 'MemAvailable: 9527228 kB' 'Buffers: 2068 kB' 'Cached: 2853744 kB' 'SwapCached: 0 kB' 'Active: 2222196 kB' 'Inactive: 730792 kB' 'Active(anon): 97384 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96488 kB' 'Mapped: 24876 kB' 'Shmem: 16892 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'KernelStack: 3680 kB' 'PageTables: 8156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.460 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.461 nr_hugepages=1024 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:52.461 resv_hugepages=0 00:15:52.461 surplus_hugepages=0 00:15:52.461 anon_hugepages=8192 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:52.461 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.462 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864656 kB' 'MemAvailable: 9527204 kB' 'Buffers: 2068 kB' 'Cached: 2853744 kB' 'SwapCached: 0 kB' 'Active: 2221936 kB' 'Inactive: 730792 kB' 'Active(anon): 97124 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96488 kB' 'Mapped: 24876 kB' 'Shmem: 16892 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'KernelStack: 3680 kB' 'PageTables: 7768 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
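(Editor's note: the trace above is common.sh's get_meminfo walking every "key: value" pair of /proc/meminfo -- or of /sys/devices/system/node/node<N>/meminfo when a node index is supplied -- until it reaches the requested key. A minimal sketch of that parsing pattern, assuming only the usual meminfo "key: value kB" layout; the function name and argument handling below are illustrative, not the literal common.sh source.)

    # Hedged sketch of the get_meminfo pattern seen in this trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live under sysfs when a node index is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}        # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                   # e.g. 1024 for HugePages_Total in this run
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # Usage matching the calls traced here:
    #   get_meminfo_sketch HugePages_Rsvd
    #   get_meminfo_sketch HugePages_Surp 0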
00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.462 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.463 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
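(Editor's note: the nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0 echoes earlier in this trace, together with the (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) arithmetic from hugepages.sh, are the test reconciling the kernel's hugepage counters against what it asked for. A hedged sketch of that bookkeeping, reusing get_meminfo_sketch from above; the helper and variable names are assumptions, not the literal hugepages.sh source.)

    # Hedged sketch of the verify-style accounting seen in this trace.
    verify_hugepages_sketch() {
        local expected=$1                      # 1024 in this run
        local resv surp total
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        surp=$(get_meminfo_sketch HugePages_Surp)
        total=$(get_meminfo_sketch HugePages_Total)
        echo "nr_hugepages=$expected"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        # The pool only checks out when the kernel's reported total matches
        # the requested count once surplus and reserved pages are folded in.
        (( total == expected + surp + resv )) && (( total == expected ))
    }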
00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864916 kB' 'MemUsed: 5436236 kB' 'Active: 2221936 kB' 'Inactive: 730792 kB' 
'Active(anon): 97124 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2855812 kB' 'Mapped: 24876 kB' 'AnonPages: 96488 kB' 'Shmem: 16892 kB' 'KernelStack: 3680 kB' 'PageTables: 7768 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.726 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:52.727 node0=1024 expecting 1024 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:52.727 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:15:52.727 0000:00:10.0 (1b36 0010): Already 
using the uio_pci_generic driver 00:15:52.727 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:15:52.727 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.727 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865184 kB' 'MemAvailable: 9527732 kB' 'Buffers: 2068 kB' 'Cached: 2853744 kB' 'SwapCached: 0 kB' 'Active: 2222328 kB' 'Inactive: 730792 kB' 'Active(anon): 97516 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96780 kB' 'Mapped: 25264 kB' 'Shmem: 16892 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'KernelStack: 3680 kB' 'PageTables: 8156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 
11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.728 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 
11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 8192 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=8192 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6864868 kB' 'MemAvailable: 9527416 kB' 'Buffers: 2068 kB' 'Cached: 2853744 kB' 'SwapCached: 0 kB' 'Active: 2222328 kB' 'Inactive: 730792 kB' 'Active(anon): 97516 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 97168 kB' 'Mapped: 25264 kB' 'Shmem: 16892 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'KernelStack: 3680 kB' 'PageTables: 8156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.729 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 
11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.730 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865128 kB' 'MemAvailable: 9527676 kB' 'Buffers: 2068 kB' 'Cached: 2853744 kB' 'SwapCached: 0 kB' 'Active: 2222068 kB' 'Inactive: 730792 kB' 'Active(anon): 97256 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96780 kB' 'Mapped: 25264 kB' 'Shmem: 16892 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'KernelStack: 3680 kB' 'PageTables: 8156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.731 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:15:52.732 nr_hugepages=1024 00:15:52.732 resv_hugepages=0 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:52.732 surplus_hugepages=0 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:52.732 anon_hugepages=8192 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 
== nr_hugepages )) 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865252 kB' 'MemAvailable: 9527800 kB' 'Buffers: 2068 kB' 'Cached: 2853744 kB' 'SwapCached: 0 kB' 'Active: 2222068 kB' 'Inactive: 730792 kB' 'Active(anon): 97256 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96876 kB' 'Mapped: 25264 kB' 'Shmem: 16892 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'KernelStack: 3680 kB' 'PageTables: 8156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 353644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 92012 kB' 'DirectMap2M: 5150720 kB' 'DirectMap1G: 9437184 kB' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.732 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.733 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:15:52.734 
11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865664 kB' 'MemUsed: 5435488 kB' 'Active: 2222068 kB' 'Inactive: 730792 kB' 'Active(anon): 97256 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2124812 kB' 'Inactive(file): 714108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2855812 kB' 'Mapped: 25264 kB' 'AnonPages: 96488 kB' 'Shmem: 16892 kB' 'KernelStack: 3680 kB' 'PageTables: 8156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171436 kB' 'SReclaimable: 122496 kB' 'SUnreclaim: 48940 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 
11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.734 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:52.735 node0=1024 expecting 1024 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:52.735 00:15:52.735 real 0m0.544s 00:15:52.735 user 0m0.269s 00:15:52.735 sys 0m0.329s 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.735 11:09:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:15:52.735 
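The long run of [[ <field> == HugePages_Total ]] / continue checks above is setup/common.sh's get_meminfo helper walking every line of a meminfo file until it reaches the requested field: first /proc/meminfo for HugePages_Total, then node0's own meminfo for HugePages_Surp. A minimal standalone sketch of that loop, assuming the stock /proc/meminfo and /sys/devices/system/node/node<N>/meminfo layouts (structure and naming here are illustrative, not a copy of the SPDK helper):

#!/usr/bin/env bash
# Illustrative re-creation of the get_meminfo loop traced above.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip the prefix so
    # the same field match works for both files.
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip (continue past) every field until the requested one, then
        # print its value and stop, exactly the pattern shown in the trace.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total      # e.g. 1024
get_meminfo HugePages_Surp 0     # e.g. 0 for node0

Reading the per-node file rather than /proc/meminfo is what lets the test confirm, just below, that all 1024 hugepages landed on node0 ('node0=1024 expecting 1024').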
************************************ 00:15:52.735 END TEST no_shrink_alloc 00:15:52.735 ************************************ 00:15:52.993 11:09:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:15:52.993 11:09:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:15:52.993 11:09:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:15:52.993 11:09:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:52.993 11:09:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:15:52.993 11:09:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:52.993 11:09:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:15:52.993 11:09:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:15:52.993 11:09:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:15:52.993 00:15:52.993 real 0m2.438s 00:15:52.993 user 0m1.115s 00:15:52.993 sys 0m1.437s 00:15:52.993 11:09:11 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.993 11:09:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:15:52.993 ************************************ 00:15:52.993 END TEST hugepages 00:15:52.993 ************************************ 00:15:52.993 11:09:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:15:52.993 11:09:11 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.993 11:09:11 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.993 11:09:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:15:52.993 ************************************ 00:15:52.993 START TEST driver 00:15:52.993 ************************************ 00:15:52.993 11:09:11 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:15:52.993 * Looking for test storage... 
00:15:52.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:52.993 11:09:11 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:15:52.993 11:09:11 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:52.993 11:09:11 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:53.251 11:09:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:15:53.251 11:09:11 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:53.251 11:09:11 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.251 11:09:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:15:53.251 ************************************ 00:15:53.251 START TEST guess_driver 00:15:53.251 ************************************ 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio.ko.xz 00:15:53.251 insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:15:53.251 Looking for driver=uio_pci_generic 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:15:53.251 11:09:11 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:53.251 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:53.510 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:15:53.510 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:15:53.510 11:09:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:53.510 11:09:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:15:53.510 11:09:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:15:53.510 11:09:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:53.510 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:53.510 11:09:12 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:15:53.510 11:09:12 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:15:53.510 11:09:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:53.510 11:09:12 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:53.768 00:15:53.768 real 0m0.609s 00:15:53.768 user 0m0.229s 00:15:53.768 sys 0m0.364s 00:15:53.768 11:09:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.768 11:09:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:15:53.768 ************************************ 00:15:53.768 END TEST guess_driver 00:15:53.768 ************************************ 00:15:54.025 00:15:54.025 real 0m0.998s 00:15:54.025 user 0m0.366s 00:15:54.025 sys 0m0.608s 00:15:54.025 11:09:12 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.025 11:09:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:15:54.025 ************************************ 00:15:54.025 END TEST driver 00:15:54.025 ************************************ 00:15:54.025 11:09:12 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:15:54.025 11:09:12 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.025 11:09:12 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.025 11:09:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:15:54.025 ************************************ 00:15:54.025 START TEST devices 00:15:54.025 ************************************ 00:15:54.025 11:09:12 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:15:54.025 * Looking for test storage... 
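The guess_driver test above settles on uio_pci_generic because this VM has no IOMMU groups (the (( 0 > 0 )) check fails) and unsafe no-IOMMU mode is not enabled, so the vfio branch returns 1 and the fallback is validated with modprobe --show-depends. A condensed, hypothetical sketch of that decision; the function names are illustrative and the real logic lives in test/setup/driver.sh:

#!/usr/bin/env bash
# Hypothetical condensation of the driver-pick logic traced above.
shopt -s nullglob   # an empty /sys/kernel/iommu_groups yields a 0-length array

is_driver() {
    # A driver is considered available if modprobe can resolve it to a .ko.
    modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
}

pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_noiommu=''
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_noiommu=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio needs a populated IOMMU, or the explicit unsafe no-IOMMU mode.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_noiommu == Y ]]; then
        is_driver vfio-pci && { echo vfio-pci; return 0; }
    fi
    # This run had 0 IOMMU groups, so the test fell through to here.
    is_driver uio_pci_generic && { echo uio_pci_generic; return 0; }
    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver)
echo "Looking for driver=$driver"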
00:15:54.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:54.025 11:09:12 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:15:54.025 11:09:12 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:15:54.025 11:09:12 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:54.025 11:09:12 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1660 -- # return 1 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:15:54.283 11:09:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:54.283 11:09:12 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:54.283 No valid GPT data, bailing 00:15:54.283 11:09:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:54.283 11:09:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:15:54.283 11:09:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:15:54.283 11:09:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:54.283 11:09:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:54.283 11:09:12 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@209 -- # (( 
1 > 0 )) 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:15:54.283 11:09:12 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.283 11:09:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:15:54.283 ************************************ 00:15:54.283 START TEST nvme_mount 00:15:54.283 ************************************ 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:15:54.283 11:09:12 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:15:55.655 Creating new GPT entries. 00:15:55.655 GPT data structures destroyed! You may now partition the disk using fdisk or 00:15:55.655 other utilities. 00:15:55.655 11:09:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:15:55.655 11:09:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:55.655 11:09:13 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:15:55.655 11:09:13 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:55.655 11:09:13 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:15:56.587 Creating new GPT entries. 
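A little further up, devices.sh settled on nvme0n1 as the test disk: it skips zoned namespaces, only treats a disk as free when spdk-gpt.py and blkid find no partition table on it ('No valid GPT data, bailing'), and requires at least min_disk_size (3221225472 bytes), derived from the device's 512-byte sector count. A simplified sketch of that gate, assuming the usual /sys/block layout; the helper names are illustrative:

#!/usr/bin/env bash
# Illustrative sketch of the block-device selection traced above.
shopt -s nullglob

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

is_block_zoned() {
    # Zoned namespaces report something other than "none" here.
    [[ -e /sys/block/$1/queue/zoned && $(< "/sys/block/$1/queue/zoned") != none ]]
}

block_in_use() {
    # The real test also consults spdk-gpt.py; here only blkid is checked
    # (an empty PTTYPE corresponds to the "No valid GPT data" path above).
    [[ -n $(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null) ]]
}

blocks=()
for block in /sys/block/nvme*; do
    dev=${block##*/}
    is_block_zoned "$dev" && continue
    block_in_use "$dev" && continue
    size=$(( $(< "$block/size") * 512 ))    # the size file counts 512-byte sectors
    (( size >= min_disk_size )) && blocks+=("$dev")
done
printf 'candidate disk: %s\n' "${blocks[@]}"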
00:15:56.587 The operation has completed successfully. 00:15:56.587 11:09:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:15:56.587 11:09:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:56.587 11:09:14 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 41048 00:15:56.587 11:09:14 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:56.587 11:09:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:15:56.587 11:09:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:56.587 11:09:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:15:56.587 11:09:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:15:56.587 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:56.587 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 
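The mkfs step traced just above is small: create the mount point, format the device with mkfs.ext4 -qF, mount it. The cleanup path later re-formats the whole disk with an explicit 1024M filesystem size, which mke2fs accepts as a trailing size argument. A hedged sketch of such a helper (argument handling assumed, not copied from setup/common.sh):

#!/usr/bin/env bash
# Minimal sketch of the mkfs-and-mount helper traced above.

mkfs_and_mount() {
    local dev=$1 mount=$2 size=${3:-}   # size (e.g. 1024M) is optional
    mkdir -p "$mount"
    [[ -e $dev ]] || return 1
    # -q keeps mkfs quiet; -F forces it to run even when the target is a
    # whole disk or looks in use. A trailing size caps the filesystem size.
    if [[ -n $size ]]; then
        mkfs.ext4 -qF "$dev" "$size"
    else
        mkfs.ext4 -qF "$dev"
    fi
    mount "$dev" "$mount"
}

# e.g.: mkfs_and_mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount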
00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:56.845 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:15:56.845 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:56.845 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:56.845 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:56.845 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:56.845 /dev/nvme0n1: calling ioclt to re-read partition table: Success 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:15:56.845 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 
-- # local dev=0000:00:10.0 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:57.103 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:57.103 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 
data@nvme0n1 '' '' 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:15:57.103 11:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:57.361 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:57.361 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:15:57.361 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:57.361 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:15:57.361 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:57.361 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:15:57.361 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:15:57.361 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:15:57.361 11:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:57.618 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:57.618 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:57.618 00:15:57.618 real 0m3.147s 00:15:57.618 user 0m0.402s 00:15:57.618 sys 0m0.619s 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:15:57.618 11:09:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:15:57.618 ************************************ 00:15:57.618 END TEST nvme_mount 00:15:57.618 ************************************ 00:15:57.618 11:09:16 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:15:57.618 11:09:16 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:57.618 11:09:16 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:57.618 11:09:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:15:57.618 ************************************ 00:15:57.618 START TEST dm_mount 00:15:57.618 ************************************ 00:15:57.618 11:09:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:15:57.618 11:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:15:57.618 11:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:15:57.618 11:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:15:57.619 11:09:16 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:15:58.551 Creating new GPT entries. 00:15:58.551 GPT data structures destroyed! You may now partition the disk using fdisk or 00:15:58.551 other utilities. 00:15:58.551 11:09:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:15:58.551 11:09:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:58.551 11:09:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:15:58.551 11:09:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:58.551 11:09:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:15:59.924 Creating new GPT entries. 00:15:59.924 The operation has completed successfully. 00:15:59.924 11:09:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:15:59.924 11:09:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:59.924 11:09:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:15:59.924 11:09:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:59.924 11:09:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:16:00.857 The operation has completed successfully. 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 41367 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:00.857 11:09:19 
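partition_drive, traced above for dm_mount, zaps the GPT and then lays out part_no contiguous partitions of size/4096 = 262144 sectors (128 MiB) each, the first one starting at sector 2048; that is exactly where the --new=1:2048:264191 and --new=2:264192:526335 ranges come from. A sketch of the same arithmetic under those assumptions (destructive, so do not point it at a disk that matters):

#!/usr/bin/env bash
# Re-derivation of the sgdisk ranges traced above:
#   part 1 = 2048..264191, part 2 = 264192..526335 (262144 sectors each).
disk=/dev/nvme0n1        # assumed test disk, as in the trace
part_no=2
size=1073741824
(( size /= 4096 ))       # 262144 sectors per partition

sgdisk "$disk" --zap-all

part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # flock serializes the table rewrite; the real script additionally waits
    # for each partition uevent (sync_dev_uevents.sh) before continuing.
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done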
setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:00.857 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:16:00.857 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:16:00.858 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:16:00.858 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:01.116 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:01.116 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:16:01.116 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:01.399 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # 
dmsetup remove --force nvme_dm_test 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:16:01.399 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:16:01.399 00:16:01.399 real 0m3.775s 00:16:01.399 user 0m0.244s 00:16:01.399 sys 0m0.431s 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.399 ************************************ 00:16:01.399 END TEST dm_mount 00:16:01.399 ************************************ 00:16:01.399 11:09:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:16:01.399 11:09:19 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:16:01.399 11:09:19 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:16:01.399 11:09:19 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:01.399 11:09:19 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:01.399 11:09:19 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:16:01.399 11:09:19 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:01.400 11:09:19 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:01.400 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:16:01.400 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:16:01.400 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:16:01.400 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:16:01.400 11:09:19 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:16:01.400 11:09:19 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:01.400 11:09:19 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:16:01.400 11:09:19 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:01.400 11:09:19 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:16:01.400 11:09:19 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:16:01.400 11:09:19 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:16:01.400 00:16:01.400 real 0m7.476s 00:16:01.400 user 0m0.914s 00:16:01.400 sys 0m1.328s 00:16:01.400 ************************************ 00:16:01.400 END TEST devices 00:16:01.400 ************************************ 00:16:01.400 11:09:19 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.400 11:09:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:16:01.400 00:16:01.400 real 0m13.074s 00:16:01.400 user 0m3.330s 00:16:01.400 sys 0m4.653s 00:16:01.400 11:09:19 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.400 11:09:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:16:01.400 ************************************ 00:16:01.400 END TEST setup.sh 00:16:01.400 ************************************ 00:16:01.400 11:09:20 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:16:01.400 
/home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:01.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:16:01.657 Hugepages 00:16:01.657 node hugesize free / total 00:16:01.657 node0 1048576kB 0 / 0 00:16:01.657 node0 2048kB 2048 / 2048 00:16:01.657 00:16:01.657 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:01.657 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:16:01.657 NVMe 0000:00:10.0 1b36 0010 0 nvme nvme0 nvme0n1 00:16:01.657 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:01.657 11:09:20 -- spdk/autotest.sh@130 -- # uname -s 00:16:01.657 11:09:20 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:16:01.657 11:09:20 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:16:01.657 11:09:20 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:01.914 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:01.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:16:02.172 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:02.172 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:02.172 11:09:20 -- common/autotest_common.sh@1528 -- # sleep 1 00:16:03.123 11:09:21 -- common/autotest_common.sh@1529 -- # bdfs=() 00:16:03.123 11:09:21 -- common/autotest_common.sh@1529 -- # local bdfs 00:16:03.123 11:09:21 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:16:03.123 11:09:21 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:16:03.123 11:09:21 -- common/autotest_common.sh@1509 -- # bdfs=() 00:16:03.123 11:09:21 -- common/autotest_common.sh@1509 -- # local bdfs 00:16:03.123 11:09:21 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:03.123 11:09:21 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:03.123 11:09:21 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:16:03.123 11:09:21 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:16:03.123 11:09:21 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:16:03.123 11:09:21 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:03.123 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:03.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:16:03.381 Waiting for block devices as requested 00:16:03.381 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:03.381 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:03.381 11:09:21 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:16:03.381 11:09:21 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:16:03.381 11:09:21 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:16:03.381 11:09:21 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:16:03.381 11:09:21 -- common/autotest_common.sh@1498 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:16:03.381 11:09:21 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:16:03.381 11:09:21 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:16:03.381 11:09:21 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:16:03.381 11:09:21 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:16:03.381 11:09:21 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:16:03.381 11:09:21 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:16:03.381 11:09:21 -- common/autotest_common.sh@1541 -- # grep oacs 00:16:03.381 11:09:21 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:16:03.381 11:09:22 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:16:03.381 11:09:22 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:16:03.381 11:09:22 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:16:03.381 11:09:22 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:16:03.381 11:09:22 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:16:03.381 11:09:22 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:16:03.381 11:09:22 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:16:03.381 11:09:22 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:16:03.381 11:09:22 -- common/autotest_common.sh@1553 -- # continue 00:16:03.381 11:09:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:16:03.381 11:09:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.381 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:16:03.640 11:09:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:16:03.640 11:09:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:03.640 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:16:03.640 11:09:22 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:03.640 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:03.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:16:03.899 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:03.899 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:16:03.899 11:09:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:16:03.899 11:09:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.899 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:16:03.899 11:09:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:16:03.899 11:09:22 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:16:03.899 11:09:22 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:16:03.899 11:09:22 -- common/autotest_common.sh@1573 -- # bdfs=() 00:16:03.899 11:09:22 -- common/autotest_common.sh@1573 -- # local bdfs 00:16:03.899 11:09:22 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:16:03.899 11:09:22 -- common/autotest_common.sh@1509 -- # bdfs=() 00:16:03.899 11:09:22 -- common/autotest_common.sh@1509 -- # local bdfs 00:16:03.899 11:09:22 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:03.899 11:09:22 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:03.899 11:09:22 -- common/autotest_common.sh@1510 -- # jq -r 
'.config[].params.traddr' 00:16:03.899 11:09:22 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:16:03.899 11:09:22 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:16:03.899 11:09:22 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:16:03.899 11:09:22 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:16:03.899 11:09:22 -- common/autotest_common.sh@1576 -- # device=0x0010 00:16:03.899 11:09:22 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:03.899 11:09:22 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:16:03.899 11:09:22 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:16:03.899 11:09:22 -- common/autotest_common.sh@1589 -- # return 0 00:16:03.899 11:09:22 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:16:03.899 11:09:22 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:16:03.899 11:09:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:03.899 11:09:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:03.899 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:16:03.899 ************************************ 00:16:03.899 START TEST unittest 00:16:03.899 ************************************ 00:16:03.899 11:09:22 unittest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:16:04.159 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:16:04.159 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:16:04.159 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:16:04.159 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:16:04.159 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:16:04.159 + rootdir=/home/vagrant/spdk_repo/spdk 00:16:04.159 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:04.159 ++ rpc_py=rpc_cmd 00:16:04.159 ++ set -e 00:16:04.159 ++ shopt -s nullglob 00:16:04.159 ++ shopt -s extglob 00:16:04.159 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:04.159 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:04.159 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:04.159 +++ CONFIG_RDMA=y 00:16:04.159 +++ CONFIG_UNIT_TESTS=y 00:16:04.159 +++ CONFIG_GOLANG=n 00:16:04.159 +++ CONFIG_FUSE=n 00:16:04.159 +++ CONFIG_ISAL=n 00:16:04.159 +++ CONFIG_VTUNE_DIR= 00:16:04.159 +++ CONFIG_CUSTOMOCF=n 00:16:04.159 +++ CONFIG_IPSEC_MB_DIR= 00:16:04.159 +++ CONFIG_VBDEV_COMPRESS=n 00:16:04.159 +++ CONFIG_OCF_PATH= 00:16:04.159 +++ CONFIG_SHARED=n 00:16:04.159 +++ CONFIG_DPDK_LIB_DIR= 00:16:04.159 +++ CONFIG_PGO_DIR= 00:16:04.159 +++ CONFIG_TESTS=y 00:16:04.159 +++ CONFIG_APPS=y 00:16:04.159 +++ CONFIG_ISAL_CRYPTO=n 00:16:04.159 +++ CONFIG_LIBDIR= 00:16:04.159 +++ CONFIG_DPDK_COMPRESSDEV=n 00:16:04.159 +++ CONFIG_DAOS_DIR= 00:16:04.159 +++ CONFIG_ISCSI_INITIATOR=n 00:16:04.159 +++ CONFIG_DPDK_PKG_CONFIG=n 00:16:04.159 +++ CONFIG_ASAN=y 00:16:04.159 +++ CONFIG_LTO=n 00:16:04.159 +++ CONFIG_CET=n 00:16:04.159 +++ CONFIG_FUZZER=n 00:16:04.159 +++ CONFIG_USDT=n 00:16:04.159 +++ CONFIG_VTUNE=n 00:16:04.159 +++ CONFIG_VHOST=y 00:16:04.159 +++ CONFIG_WPDK_DIR= 00:16:04.159 +++ CONFIG_UBLK=n 00:16:04.159 +++ CONFIG_URING=n 00:16:04.159 +++ CONFIG_SMA=n 00:16:04.159 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:04.159 +++ CONFIG_IDXD_KERNEL=n 00:16:04.159 +++ CONFIG_FC_PATH= 00:16:04.159 +++ CONFIG_PREFIX=/usr/local 00:16:04.159 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:16:04.159 +++ CONFIG_XNVME=n 00:16:04.159 +++ CONFIG_RDMA_PROV=verbs 00:16:04.159 +++ CONFIG_RDMA_SET_TOS=y 00:16:04.159 +++ CONFIG_FUZZER_LIB= 00:16:04.159 +++ CONFIG_HAVE_LIBARCHIVE=n 00:16:04.159 +++ CONFIG_ARCH=native 00:16:04.159 +++ CONFIG_PGO_CAPTURE=n 00:16:04.159 +++ CONFIG_DAOS=y 00:16:04.159 +++ CONFIG_WERROR=y 00:16:04.159 +++ CONFIG_DEBUG=y 00:16:04.159 +++ CONFIG_AVAHI=n 00:16:04.159 +++ CONFIG_CROSS_PREFIX= 00:16:04.159 +++ CONFIG_HAVE_KEYUTILS=n 00:16:04.159 +++ CONFIG_PGO_USE=n 00:16:04.159 +++ CONFIG_CRYPTO=n 00:16:04.159 +++ CONFIG_HAVE_ARC4RANDOM=n 00:16:04.159 +++ CONFIG_OPENSSL_PATH= 00:16:04.159 +++ CONFIG_EXAMPLES=y 00:16:04.159 +++ CONFIG_DPDK_INC_DIR= 00:16:04.159 +++ CONFIG_HAVE_EVP_MAC=n 00:16:04.159 +++ CONFIG_MAX_LCORES= 00:16:04.159 +++ CONFIG_VIRTIO=y 00:16:04.159 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:04.159 +++ CONFIG_IPSEC_MB=n 00:16:04.159 +++ CONFIG_UBSAN=n 00:16:04.159 +++ CONFIG_HAVE_EXECINFO_H=y 00:16:04.159 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:04.159 +++ CONFIG_HAVE_LIBBSD=n 00:16:04.159 +++ CONFIG_URING_PATH= 00:16:04.159 +++ CONFIG_NVME_CUSE=y 00:16:04.159 +++ CONFIG_URING_ZNS=n 00:16:04.159 +++ CONFIG_VFIO_USER=n 00:16:04.159 +++ CONFIG_FC=n 00:16:04.159 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:16:04.159 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:04.159 +++ CONFIG_RBD=n 00:16:04.159 +++ CONFIG_RAID5F=n 00:16:04.159 +++ CONFIG_VFIO_USER_DIR= 00:16:04.159 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:04.159 +++ CONFIG_TSAN=n 00:16:04.159 +++ CONFIG_IDXD=y 00:16:04.159 +++ CONFIG_DPDK_UADK=n 00:16:04.159 +++ CONFIG_OCF=n 00:16:04.159 +++ CONFIG_CRYPTO_MLX5=n 00:16:04.159 +++ CONFIG_FIO_PLUGIN=y 00:16:04.159 +++ CONFIG_COVERAGE=y 00:16:04.159 ++ source 
/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:04.159 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:04.159 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:04.159 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:04.159 +++ _root=/home/vagrant/spdk_repo/spdk 00:16:04.159 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:04.159 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:04.159 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:04.159 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:04.159 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:04.159 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:04.159 +++ VHOST_APP=("$_app_dir/vhost") 00:16:04.159 +++ DD_APP=("$_app_dir/spdk_dd") 00:16:04.159 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:16:04.159 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:04.159 +++ [[ #ifndef SPDK_CONFIG_H 00:16:04.159 #define SPDK_CONFIG_H 00:16:04.159 #define SPDK_CONFIG_APPS 1 00:16:04.159 #define SPDK_CONFIG_ARCH native 00:16:04.159 #define SPDK_CONFIG_ASAN 1 00:16:04.159 #undef SPDK_CONFIG_AVAHI 00:16:04.159 #undef SPDK_CONFIG_CET 00:16:04.159 #define SPDK_CONFIG_COVERAGE 1 00:16:04.159 #define SPDK_CONFIG_CROSS_PREFIX 00:16:04.159 #undef SPDK_CONFIG_CRYPTO 00:16:04.159 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:04.159 #undef SPDK_CONFIG_CUSTOMOCF 00:16:04.159 #define SPDK_CONFIG_DAOS 1 00:16:04.159 #define SPDK_CONFIG_DAOS_DIR 00:16:04.159 #define SPDK_CONFIG_DEBUG 1 00:16:04.159 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:04.159 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:16:04.159 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:04.159 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:04.159 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:04.159 #undef SPDK_CONFIG_DPDK_UADK 00:16:04.159 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:04.159 #define SPDK_CONFIG_EXAMPLES 1 00:16:04.159 #undef SPDK_CONFIG_FC 00:16:04.159 #define SPDK_CONFIG_FC_PATH 00:16:04.159 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:04.159 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:04.159 #undef SPDK_CONFIG_FUSE 00:16:04.159 #undef SPDK_CONFIG_FUZZER 00:16:04.159 #define SPDK_CONFIG_FUZZER_LIB 00:16:04.159 #undef SPDK_CONFIG_GOLANG 00:16:04.159 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:16:04.159 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:16:04.159 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:04.159 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:16:04.159 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:04.159 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:04.159 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:16:04.159 #define SPDK_CONFIG_IDXD 1 00:16:04.159 #undef SPDK_CONFIG_IDXD_KERNEL 00:16:04.159 #undef SPDK_CONFIG_IPSEC_MB 00:16:04.159 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:04.159 #undef SPDK_CONFIG_ISAL 00:16:04.159 #undef SPDK_CONFIG_ISAL_CRYPTO 00:16:04.159 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:16:04.159 #define SPDK_CONFIG_LIBDIR 00:16:04.159 #undef SPDK_CONFIG_LTO 00:16:04.159 #define SPDK_CONFIG_MAX_LCORES 00:16:04.159 #define SPDK_CONFIG_NVME_CUSE 1 00:16:04.159 #undef SPDK_CONFIG_OCF 00:16:04.159 #define SPDK_CONFIG_OCF_PATH 00:16:04.159 #define SPDK_CONFIG_OPENSSL_PATH 00:16:04.159 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:04.159 #define SPDK_CONFIG_PGO_DIR 00:16:04.159 #undef SPDK_CONFIG_PGO_USE 00:16:04.159 #define SPDK_CONFIG_PREFIX /usr/local 00:16:04.159 #undef SPDK_CONFIG_RAID5F 00:16:04.159 #undef SPDK_CONFIG_RBD 00:16:04.159 #define SPDK_CONFIG_RDMA 1 
00:16:04.159 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:04.159 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:04.159 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:16:04.159 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:04.159 #undef SPDK_CONFIG_SHARED 00:16:04.159 #undef SPDK_CONFIG_SMA 00:16:04.159 #define SPDK_CONFIG_TESTS 1 00:16:04.159 #undef SPDK_CONFIG_TSAN 00:16:04.159 #undef SPDK_CONFIG_UBLK 00:16:04.159 #undef SPDK_CONFIG_UBSAN 00:16:04.159 #define SPDK_CONFIG_UNIT_TESTS 1 00:16:04.159 #undef SPDK_CONFIG_URING 00:16:04.159 #define SPDK_CONFIG_URING_PATH 00:16:04.159 #undef SPDK_CONFIG_URING_ZNS 00:16:04.159 #undef SPDK_CONFIG_USDT 00:16:04.159 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:04.159 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:04.159 #undef SPDK_CONFIG_VFIO_USER 00:16:04.159 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:04.159 #define SPDK_CONFIG_VHOST 1 00:16:04.159 #define SPDK_CONFIG_VIRTIO 1 00:16:04.159 #undef SPDK_CONFIG_VTUNE 00:16:04.159 #define SPDK_CONFIG_VTUNE_DIR 00:16:04.159 #define SPDK_CONFIG_WERROR 1 00:16:04.159 #define SPDK_CONFIG_WPDK_DIR 00:16:04.159 #undef SPDK_CONFIG_XNVME 00:16:04.159 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:04.159 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:04.159 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:04.159 +++ [[ -e /bin/wpdk_common.sh ]] 00:16:04.159 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.159 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.159 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:04.159 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:04.159 ++++ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:04.159 ++++ export PATH 00:16:04.159 ++++ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:04.159 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:04.159 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:04.159 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:04.159 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:04.159 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:04.159 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:04.159 +++ TEST_TAG=N/A 00:16:04.159 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:04.159 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:04.159 ++++ uname -s 00:16:04.159 +++ PM_OS=Linux 00:16:04.159 +++ MONITOR_RESOURCES_SUDO=() 00:16:04.159 +++ declare -A MONITOR_RESOURCES_SUDO 00:16:04.159 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:04.159 
+++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:04.159 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:04.159 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:04.159 +++ SUDO[0]= 00:16:04.159 +++ SUDO[1]='sudo -E' 00:16:04.159 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:04.159 +++ [[ Linux == FreeBSD ]] 00:16:04.159 +++ [[ Linux == Linux ]] 00:16:04.159 +++ [[ QEMU != QEMU ]] 00:16:04.159 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:04.159 ++ : 0 00:16:04.159 ++ export RUN_NIGHTLY 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_RUN_VALGRIND 00:16:04.159 ++ : 1 00:16:04.159 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:16:04.159 ++ : 1 00:16:04.159 ++ export SPDK_TEST_UNITTEST 00:16:04.159 ++ : 00:16:04.159 ++ export SPDK_TEST_AUTOBUILD 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_RELEASE_BUILD 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_ISAL 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_ISCSI 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_ISCSI_INITIATOR 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_NVME 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_NVME_PMR 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_NVME_BP 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_NVME_CLI 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_NVME_CUSE 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_NVME_FDP 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_NVMF 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_VFIOUSER 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_VFIOUSER_QEMU 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_FUZZER 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_FUZZER_SHORT 00:16:04.159 ++ : rdma 00:16:04.159 ++ export SPDK_TEST_NVMF_TRANSPORT 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_RBD 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_VHOST 00:16:04.159 ++ : 1 00:16:04.159 ++ export SPDK_TEST_BLOCKDEV 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_IOAT 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_BLOBFS 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_VHOST_INIT 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_LVOL 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_VBDEV_COMPRESS 00:16:04.159 ++ : 1 00:16:04.159 ++ export SPDK_RUN_ASAN 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_RUN_UBSAN 00:16:04.159 ++ : 00:16:04.159 ++ export SPDK_RUN_EXTERNAL_DPDK 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_RUN_NON_ROOT 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_CRYPTO 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_FTL 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_OCF 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_VMD 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_OPAL 00:16:04.159 ++ : 00:16:04.159 ++ export SPDK_TEST_NATIVE_DPDK 00:16:04.159 ++ : true 00:16:04.159 ++ export SPDK_AUTOTEST_X 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_RAID5 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_URING 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_USDT 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_USE_IGB_UIO 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_SCHEDULER 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_SCANBUILD 00:16:04.159 ++ : 00:16:04.159 ++ export SPDK_TEST_NVMF_NICS 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_SMA 00:16:04.159 ++ : 1 00:16:04.159 ++ export SPDK_TEST_DAOS 
00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_XNVME 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_ACCEL_DSA 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_ACCEL_IAA 00:16:04.159 ++ : 00:16:04.159 ++ export SPDK_TEST_FUZZER_TARGET 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_TEST_NVMF_MDNS 00:16:04.159 ++ : 0 00:16:04.159 ++ export SPDK_JSONRPC_GO_CLIENT 00:16:04.159 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:04.159 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:04.159 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:04.159 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:04.159 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:04.159 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:04.159 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:04.159 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:04.159 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:04.159 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:16:04.159 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:04.159 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:04.159 ++ export PYTHONDONTWRITEBYTECODE=1 00:16:04.159 ++ PYTHONDONTWRITEBYTECODE=1 00:16:04.159 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:04.159 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:04.159 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:04.159 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:04.159 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:16:04.159 ++ rm -rf /var/tmp/asan_suppression_file 00:16:04.159 ++ cat 00:16:04.159 ++ echo leak:libfuse3.so 00:16:04.159 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:04.159 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:04.159 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:04.159 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:04.159 ++ '[' -z /var/spdk/dependencies ']' 00:16:04.159 ++ export DEPENDENCY_DIR 00:16:04.159 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:04.159 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:04.159 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:04.159 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:04.159 ++ export QEMU_BIN= 00:16:04.159 ++ QEMU_BIN= 00:16:04.160 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:16:04.160 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:16:04.160 ++ export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:04.160 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:04.160 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:04.160 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:04.160 ++ '[' 0 -eq 0 ']' 00:16:04.160 ++ export valgrind= 00:16:04.160 ++ valgrind= 00:16:04.160 +++ uname -s 00:16:04.160 ++ '[' Linux = Linux ']' 00:16:04.160 ++ HUGEMEM=4096 00:16:04.160 ++ export CLEAR_HUGE=yes 00:16:04.160 ++ CLEAR_HUGE=yes 00:16:04.160 ++ [[ 0 -eq 1 ]] 00:16:04.160 ++ [[ 0 -eq 1 ]] 00:16:04.160 ++ MAKE=make 00:16:04.160 +++ nproc 00:16:04.160 ++ MAKEFLAGS=-j10 00:16:04.160 ++ export HUGEMEM=4096 00:16:04.160 ++ HUGEMEM=4096 00:16:04.160 ++ NO_HUGE=() 00:16:04.160 ++ TEST_MODE= 00:16:04.160 ++ [[ -z '' ]] 00:16:04.160 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:16:04.160 ++ exec 00:16:04.160 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:16:04.160 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:16:04.160 ++ set_test_storage 2147483648 00:16:04.160 ++ [[ -v testdir ]] 00:16:04.160 ++ local requested_size=2147483648 00:16:04.160 ++ local mount target_dir 00:16:04.160 ++ local -A mounts fss sizes avails uses 00:16:04.160 ++ local source fs size avail mount use 00:16:04.160 ++ local storage_fallback storage_candidates 00:16:04.160 +++ mktemp -udt spdk.XXXXXX 00:16:04.160 ++ storage_fallback=/tmp/spdk.uoSUw5 00:16:04.160 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:04.160 ++ [[ -n '' ]] 00:16:04.160 ++ [[ -n '' ]] 00:16:04.160 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.uoSUw5/tests/unit /tmp/spdk.uoSUw5 00:16:04.160 ++ requested_size=2214592512 00:16:04.160 ++ read -r source fs size use avail _ mount 00:16:04.160 +++ df -T 00:16:04.160 +++ grep -v Filesystem 00:16:04.160 ++ mounts["$mount"]=devtmpfs 00:16:04.160 ++ fss["$mount"]=devtmpfs 00:16:04.160 ++ avails["$mount"]=6267637760 00:16:04.160 ++ sizes["$mount"]=6267637760 00:16:04.160 ++ uses["$mount"]=0 00:16:04.160 ++ read -r source fs size use avail _ mount 00:16:04.160 ++ mounts["$mount"]=tmpfs 00:16:04.160 ++ fss["$mount"]=tmpfs 00:16:04.160 ++ avails["$mount"]=6298189824 00:16:04.160 ++ sizes["$mount"]=6298189824 00:16:04.160 ++ uses["$mount"]=0 00:16:04.160 ++ read -r source fs size use avail _ mount 00:16:04.160 ++ mounts["$mount"]=tmpfs 00:16:04.160 ++ fss["$mount"]=tmpfs 00:16:04.160 ++ avails["$mount"]=6280888320 00:16:04.160 ++ sizes["$mount"]=6298189824 00:16:04.160 ++ uses["$mount"]=17301504 00:16:04.160 ++ read -r source fs size use avail _ mount 00:16:04.160 ++ mounts["$mount"]=tmpfs 00:16:04.160 ++ fss["$mount"]=tmpfs 00:16:04.160 ++ avails["$mount"]=6298189824 00:16:04.160 ++ sizes["$mount"]=6298189824 00:16:04.160 ++ uses["$mount"]=0 00:16:04.160 ++ read -r source fs size use avail _ mount 00:16:04.160 ++ mounts["$mount"]=/dev/vda1 00:16:04.160 ++ fss["$mount"]=xfs 00:16:04.160 ++ avails["$mount"]=14339424256 00:16:04.160 ++ sizes["$mount"]=21463302144 00:16:04.160 ++ uses["$mount"]=7123877888 00:16:04.160 ++ read -r source fs size use avail _ mount 00:16:04.160 ++ mounts["$mount"]=tmpfs 00:16:04.160 ++ fss["$mount"]=tmpfs 00:16:04.160 ++ avails["$mount"]=1259638784 00:16:04.160 ++ sizes["$mount"]=1259638784 00:16:04.160 ++ uses["$mount"]=0 00:16:04.160 ++ read -r source fs size use avail _ mount 00:16:04.160 ++ 
mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:16:04.160 ++ fss["$mount"]=fuse.sshfs 00:16:04.160 ++ avails["$mount"]=93582106624 00:16:04.160 ++ sizes["$mount"]=105088212992 00:16:04.160 ++ uses["$mount"]=6120673280 00:16:04.160 ++ read -r source fs size use avail _ mount 00:16:04.160 ++ printf '* Looking for test storage...\n' 00:16:04.160 * Looking for test storage... 00:16:04.160 ++ local target_space new_size 00:16:04.160 ++ for target_dir in "${storage_candidates[@]}" 00:16:04.160 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:16:04.160 +++ awk '$1 !~ /Filesystem/{print $6}' 00:16:04.160 ++ mount=/ 00:16:04.160 ++ target_space=14339424256 00:16:04.160 ++ (( target_space == 0 || target_space < requested_size )) 00:16:04.160 ++ (( target_space >= requested_size )) 00:16:04.160 ++ [[ xfs == tmpfs ]] 00:16:04.160 ++ [[ xfs == ramfs ]] 00:16:04.160 ++ [[ / == / ]] 00:16:04.160 ++ new_size=9338470400 00:16:04.160 ++ (( new_size * 100 / sizes[/] > 95 )) 00:16:04.160 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:16:04.160 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:16:04.160 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:16:04.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:16:04.160 ++ return 0 00:16:04.160 ++ set -o errtrace 00:16:04.160 ++ shopt -s extdebug 00:16:04.160 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:16:04.160 ++ PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:04.160 11:09:22 unittest -- common/autotest_common.sh@1683 -- # true 00:16:04.160 11:09:22 unittest -- common/autotest_common.sh@1685 -- # xtrace_fd 00:16:04.160 11:09:22 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:16:04.160 11:09:22 unittest -- common/autotest_common.sh@29 -- # exec 00:16:04.160 11:09:22 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:04.160 11:09:22 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:04.160 11:09:22 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:04.160 11:09:22 unittest -- common/autotest_common.sh@18 -- # set -x 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@158 -- # '[' -z x ']' 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@179 -- # hash lcov 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@180 -- # cov_avail=yes 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:16:04.160 --rc lcov_branch_coverage=1 00:16:04.160 --rc lcov_function_coverage=1 00:16:04.160 --rc genhtml_branch_coverage=1 00:16:04.160 --rc genhtml_function_coverage=1 00:16:04.160 --rc genhtml_legend=1 00:16:04.160 --rc geninfo_all_blocks=1 00:16:04.160 ' 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:16:04.160 --rc lcov_branch_coverage=1 00:16:04.160 --rc lcov_function_coverage=1 00:16:04.160 --rc genhtml_branch_coverage=1 00:16:04.160 --rc genhtml_function_coverage=1 00:16:04.160 --rc genhtml_legend=1 00:16:04.160 --rc geninfo_all_blocks=1 00:16:04.160 ' 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:16:04.160 --rc lcov_branch_coverage=1 00:16:04.160 --rc lcov_function_coverage=1 00:16:04.160 --rc genhtml_branch_coverage=1 00:16:04.160 --rc genhtml_function_coverage=1 00:16:04.160 --rc genhtml_legend=1 00:16:04.160 --rc geninfo_all_blocks=1 00:16:04.160 --no-external' 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@200 -- # LCOV='lcov 00:16:04.160 --rc lcov_branch_coverage=1 00:16:04.160 --rc lcov_function_coverage=1 00:16:04.160 --rc genhtml_branch_coverage=1 00:16:04.160 --rc genhtml_function_coverage=1 00:16:04.160 --rc genhtml_legend=1 00:16:04.160 --rc geninfo_all_blocks=1 00:16:04.160 --no-external' 00:16:04.160 11:09:22 unittest -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:16:12.269 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:16:12.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:16:12.269 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:16:12.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:16:12.269 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:16:12.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:16:30.345 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:16:30.345 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:16:30.345 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:16:30.345 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 
00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:16:30.346 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:16:30.346 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:16:30.346 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:16:30.346 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:16:30.347 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:16:30.347 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:16:30.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:17:17.068 11:10:31 unittest -- unit/unittest.sh@206 -- # uname -m 00:17:17.068 11:10:31 unittest -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:17:17.068 11:10:31 unittest -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:17:17.068 11:10:31 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:17.068 11:10:31 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:17.068 11:10:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:17.068 ************************************ 00:17:17.068 START TEST unittest_pci_event 00:17:17.068 ************************************ 00:17:17.068 11:10:31 unittest.unittest_pci_event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:17:17.068 00:17:17.068 00:17:17.068 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.068 http://cunit.sourceforge.net/ 00:17:17.068 00:17:17.068 00:17:17.068 Suite: pci_event 00:17:17.068 Test: test_pci_parse_event ...passed 00:17:17.068 00:17:17.068 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.068 suites 1 1 n/a 0 0 00:17:17.068 tests 1 1 1 0 0 00:17:17.068 asserts 15 15 15 0 n/a 00:17:17.068 00:17:17.068 Elapsed time = 0.000 seconds 00:17:17.068 [2024-05-15 11:10:31.655127] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:17:17.068 [2024-05-15 11:10:31.655413] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 
185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:17:17.068 00:17:17.068 real 0m0.022s 00:17:17.068 user 0m0.012s 00:17:17.068 sys 0m0.010s 00:17:17.068 11:10:31 unittest.unittest_pci_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:17.068 11:10:31 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:17:17.068 ************************************ 00:17:17.068 END TEST unittest_pci_event 00:17:17.068 ************************************ 00:17:17.068 11:10:31 unittest -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:17:17.068 11:10:31 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:17.068 11:10:31 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:17.068 11:10:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:17.068 ************************************ 00:17:17.068 START TEST unittest_include 00:17:17.068 ************************************ 00:17:17.068 11:10:31 unittest.unittest_include -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:17:17.068 00:17:17.068 00:17:17.068 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.068 http://cunit.sourceforge.net/ 00:17:17.068 00:17:17.068 00:17:17.068 Suite: histogram 00:17:17.068 Test: histogram_test ...passed 00:17:17.068 Test: histogram_merge ...passed 00:17:17.068 00:17:17.068 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.068 suites 1 1 n/a 0 0 00:17:17.068 tests 2 2 2 0 0 00:17:17.068 asserts 50 50 50 0 n/a 00:17:17.068 00:17:17.068 Elapsed time = 0.000 seconds 00:17:17.068 ************************************ 00:17:17.068 END TEST unittest_include 00:17:17.068 ************************************ 00:17:17.068 00:17:17.068 real 0m0.021s 00:17:17.068 user 0m0.013s 00:17:17.068 sys 0m0.008s 00:17:17.068 11:10:31 unittest.unittest_include -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:17.068 11:10:31 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:17:17.068 11:10:31 unittest -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:17:17.068 11:10:31 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:17.068 11:10:31 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:17.068 11:10:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:17.068 ************************************ 00:17:17.068 START TEST unittest_bdev 00:17:17.068 ************************************ 00:17:17.068 11:10:31 unittest.unittest_bdev -- common/autotest_common.sh@1121 -- # unittest_bdev 00:17:17.068 11:10:31 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:17:17.068 00:17:17.068 00:17:17.068 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.068 http://cunit.sourceforge.net/ 00:17:17.068 00:17:17.068 00:17:17.068 Suite: bdev 00:17:17.068 Test: bytes_to_blocks_test ...passed 00:17:17.068 Test: num_blocks_test ...passed 00:17:17.068 Test: io_valid_test ...passed 00:17:17.068 Test: open_write_test ...[2024-05-15 11:10:31.829150] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:17:17.068 [2024-05-15 11:10:31.829404] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev4 already 
claimed: type exclusive_write by module bdev_ut 00:17:17.068 [2024-05-15 11:10:31.829487] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:17:17.068 passed 00:17:17.068 Test: claim_test ...passed 00:17:17.068 Test: alias_add_del_test ...[2024-05-15 11:10:31.931228] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:17:17.068 [2024-05-15 11:10:31.931374] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4605:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:17:17.068 [2024-05-15 11:10:31.931432] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:17:17.068 passed 00:17:17.068 Test: get_device_stat_test ...passed 00:17:17.068 Test: bdev_io_types_test ...passed 00:17:17.068 Test: bdev_io_wait_test ...passed 00:17:17.068 Test: bdev_io_spans_split_test ...passed 00:17:17.068 Test: bdev_io_boundary_split_test ...passed 00:17:17.068 Test: bdev_io_max_size_and_segment_split_test ...[2024-05-15 11:10:32.138052] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:17:17.068 passed 00:17:17.068 Test: bdev_io_mix_split_test ...passed 00:17:17.068 Test: bdev_io_split_with_io_wait ...passed 00:17:17.068 Test: bdev_io_write_unit_split_test ...[2024-05-15 11:10:32.307656] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:17:17.068 [2024-05-15 11:10:32.307770] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:17:17.068 [2024-05-15 11:10:32.308065] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:17:17.068 [2024-05-15 11:10:32.308148] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:17:17.068 passed 00:17:17.068 Test: bdev_io_alignment_with_boundary ...passed 00:17:17.068 Test: bdev_io_alignment ...passed 00:17:17.068 Test: bdev_histograms ...passed 00:17:17.068 Test: bdev_write_zeroes ...passed 00:17:17.068 Test: bdev_compare_and_write ...passed 00:17:17.068 Test: bdev_compare ...passed 00:17:17.068 Test: bdev_compare_emulated ...passed 00:17:17.068 Test: bdev_zcopy_write ...passed 00:17:17.068 Test: bdev_zcopy_read ...passed 00:17:17.068 Test: bdev_open_while_hotremove ...passed 00:17:17.068 Test: bdev_close_while_hotremove ...passed 00:17:17.068 Test: bdev_open_ext_test ...passed 00:17:17.068 Test: bdev_open_ext_unregister ...[2024-05-15 11:10:32.841474] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:17:17.068 [2024-05-15 11:10:32.841659] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:17:17.068 passed 00:17:17.068 Test: bdev_set_io_timeout ...passed 00:17:17.068 Test: bdev_set_qd_sampling ...passed 00:17:17.068 Test: lba_range_overlap ...passed 00:17:17.068 Test: lock_lba_range_check_ranges ...passed 00:17:17.068 Test: lock_lba_range_with_io_outstanding ...passed 00:17:17.068 Test: lock_lba_range_overlapped ...passed 00:17:17.068 Test: bdev_quiesce ...[2024-05-15 11:10:33.047844] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10059:_spdk_bdev_quiesce: *ERROR*: 
The range to unquiesce was not found. 00:17:17.068 passed 00:17:17.068 Test: bdev_io_abort ...passed 00:17:17.068 Test: bdev_unmap ...passed 00:17:17.068 Test: bdev_write_zeroes_split_test ...passed 00:17:17.068 Test: bdev_set_options_test ...passed 00:17:17.068 Test: bdev_get_memory_domains ...passed 00:17:17.068 Test: bdev_io_ext ...[2024-05-15 11:10:33.188645] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:17:17.068 passed 00:17:17.068 Test: bdev_io_ext_no_opts ...passed 00:17:17.068 Test: bdev_io_ext_invalid_opts ...passed 00:17:17.068 Test: bdev_io_ext_split ...passed 00:17:17.068 Test: bdev_io_ext_bounce_buffer ...passed 00:17:17.068 Test: bdev_register_uuid_alias ...[2024-05-15 11:10:33.391023] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name b62cb581-04b7-4fcf-992c-d9f9e35d2299 already exists 00:17:17.068 [2024-05-15 11:10:33.391098] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:b62cb581-04b7-4fcf-992c-d9f9e35d2299 alias for bdev bdev0 00:17:17.068 passed 00:17:17.068 Test: bdev_unregister_by_name ...[2024-05-15 11:10:33.410396] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7926:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:17:17.068 passed 00:17:17.068 Test: for_each_bdev_test ...[2024-05-15 11:10:33.410445] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7934:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:17:17.068 passed 00:17:17.068 Test: bdev_seek_test ...passed 00:17:17.068 Test: bdev_copy ...passed 00:17:17.068 Test: bdev_copy_split_test ...passed 00:17:17.068 Test: examine_locks ...passed 00:17:17.068 Test: claim_v2_rwo ...passed 00:17:17.068 Test: claim_v2_rom ...[2024-05-15 11:10:33.522483] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.522547] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8660:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.522568] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.522620] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.522639] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.522680] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8655:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:17:17.069 [2024-05-15 11:10:33.522795] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:17:17.069 passed 00:17:17.069 Test: claim_v2_rwm ...[2024-05-15 11:10:33.522863] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.522887] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.522917] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.522948] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:17:17.069 [2024-05-15 11:10:33.522983] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8693:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:17:17.069 [2024-05-15 11:10:33.523076] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8728:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:17:17.069 [2024-05-15 11:10:33.523120] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.523151] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.523176] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:17:17.069 passed 00:17:17.069 Test: claim_v2_existing_writer ...passed 00:17:17.069 Test: claim_v2_existing_v1 ...[2024-05-15 11:10:33.523195] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.523220] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8748:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.523250] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8728:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:17:17.069 [2024-05-15 11:10:33.523368] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8693:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:17:17.069 [2024-05-15 11:10:33.523398] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8693:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:17:17.069 passed 00:17:17.069 Test: claim_v1_existing_v2 ...[2024-05-15 11:10:33.523492] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.523522] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.523541] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.523635] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.523685] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:17:17.069 passed 00:17:17.069 Test: examine_claimed ...[2024-05-15 11:10:33.523716] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:17:17.069 [2024-05-15 11:10:33.523946] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:17:17.069 passed 00:17:17.069 00:17:17.069 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.069 suites 1 1 n/a 0 0 00:17:17.069 tests 59 59 59 0 0 00:17:17.069 asserts 4599 4599 4599 0 n/a 00:17:17.069 00:17:17.069 Elapsed time = 1.760 seconds 00:17:17.069 11:10:33 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:17:17.069 00:17:17.069 00:17:17.069 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.069 http://cunit.sourceforge.net/ 00:17:17.069 00:17:17.069 00:17:17.069 Suite: nvme 00:17:17.069 Test: test_create_ctrlr ...passed 00:17:17.069 Test: test_reset_ctrlr ...passed 00:17:17.069 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:17:17.069 Test: test_failover_ctrlr ...passed 00:17:17.069 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:17:17.069 Test: test_pending_reset ...passed 00:17:17.069 Test: test_attach_ctrlr ...passed 00:17:17.069 Test: test_aer_cb ...passed 00:17:17.069 Test: test_submit_nvme_cmd ...passed 00:17:17.069 Test: test_add_remove_trid ...[2024-05-15 11:10:33.557591] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.558380] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.558447] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.558511] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.559120] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.559201] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.559572] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:17.069 passed 00:17:17.069 Test: test_abort ...[2024-05-15 11:10:33.560834] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7436:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 
00:17:17.069 passed 00:17:17.069 Test: test_get_io_qpair ...passed 00:17:17.069 Test: test_bdev_unregister ...passed 00:17:17.069 Test: test_compare_ns ...passed 00:17:17.069 Test: test_init_ana_log_page ...passed 00:17:17.069 Test: test_get_memory_domains ...passed 00:17:17.069 Test: test_reconnect_qpair ...passed 00:17:17.069 Test: test_create_bdev_ctrlr ...[2024-05-15 11:10:33.561859] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.562113] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5362:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:17:17.069 passed 00:17:17.069 Test: test_add_multi_ns_to_bdev ...[2024-05-15 11:10:33.562612] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4553:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:17:17.069 passed 00:17:17.069 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:17:17.069 Test: test_admin_path ...passed 00:17:17.069 Test: test_reset_bdev_ctrlr ...passed 00:17:17.069 Test: test_find_io_path ...passed 00:17:17.069 Test: test_retry_io_if_ana_state_is_updating ...passed 00:17:17.069 Test: test_retry_io_for_io_path_error ...passed 00:17:17.069 Test: test_retry_io_count ...passed 00:17:17.069 Test: test_concurrent_read_ana_log_page ...passed 00:17:17.069 Test: test_retry_io_for_ana_error ...passed 00:17:17.069 Test: test_check_io_error_resiliency_params ...[2024-05-15 11:10:33.565525] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6056:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:17:17.069 [2024-05-15 11:10:33.565611] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6060:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:17:17.069 [2024-05-15 11:10:33.565639] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6069:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:17:17.069 [2024-05-15 11:10:33.565683] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6072:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:17:17.069 [2024-05-15 11:10:33.565708] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:17:17.069 [2024-05-15 11:10:33.565752] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:17:17.069 [2024-05-15 11:10:33.565780] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6064:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:17:17.069 [2024-05-15 11:10:33.565835] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6079:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 
00:17:17.069 passed 00:17:17.069 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-05-15 11:10:33.565878] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:17:17.069 passed 00:17:17.069 Test: test_reconnect_ctrlr ...[2024-05-15 11:10:33.566228] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.566330] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.566453] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 passed 00:17:17.069 Test: test_retry_failover_ctrlr ...[2024-05-15 11:10:33.566516] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.566579] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 passed 00:17:17.069 Test: test_fail_path ...[2024-05-15 11:10:33.566767] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.567068] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.567147] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.069 [2024-05-15 11:10:33.567212] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.070 [2024-05-15 11:10:33.567262] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.070 passed 00:17:17.070 Test: test_nvme_ns_cmp ...passed 00:17:17.070 Test: test_ana_transition ...passed 00:17:17.070 Test: test_set_preferred_path ...[2024-05-15 11:10:33.567340] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.070 passed 00:17:17.070 Test: test_find_next_io_path ...passed 00:17:17.070 Test: test_find_io_path_min_qd ...passed 00:17:17.070 Test: test_disable_auto_failback ...[2024-05-15 11:10:33.568202] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.070 passed 00:17:17.070 Test: test_set_multipath_policy ...passed 00:17:17.070 Test: test_uuid_generation ...passed 00:17:17.070 Test: test_retry_io_to_same_path ...passed 00:17:17.070 Test: test_race_between_reset_and_disconnected ...passed 00:17:17.070 Test: test_ctrlr_op_rpc ...passed 00:17:17.070 Test: test_bdev_ctrlr_op_rpc ...passed 00:17:17.070 Test: test_disable_enable_ctrlr ...[2024-05-15 11:10:33.570073] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:17.070 [2024-05-15 11:10:33.570159] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.070 passed 00:17:17.070 Test: test_delete_ctrlr_done ...passed 00:17:17.070 Test: test_ns_remove_during_reset ...passed 00:17:17.070 00:17:17.070 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.070 suites 1 1 n/a 0 0 00:17:17.070 tests 48 48 48 0 0 00:17:17.070 asserts 3565 3565 3565 0 n/a 00:17:17.070 00:17:17.070 Elapsed time = 0.010 seconds 00:17:17.070 11:10:33 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:17:17.070 00:17:17.070 00:17:17.070 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.070 http://cunit.sourceforge.net/ 00:17:17.070 00:17:17.070 Test Options 00:17:17.070 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:17:17.070 00:17:17.070 Suite: raid 00:17:17.070 Test: test_create_raid ...passed 00:17:17.070 Test: test_create_raid_superblock ...passed 00:17:17.070 Test: test_delete_raid ...passed 00:17:17.070 Test: test_create_raid_invalid_args ...[2024-05-15 11:10:33.596412] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:17:17.070 [2024-05-15 11:10:33.596754] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:17:17.070 [2024-05-15 11:10:33.597074] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:17:17.070 [2024-05-15 11:10:33.597233] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:17:17.070 [2024-05-15 11:10:33.597300] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:17:17.070 [2024-05-15 11:10:33.597965] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:17:17.070 [2024-05-15 11:10:33.597997] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:17:17.070 passed 00:17:17.070 Test: test_delete_raid_invalid_args ...passed 00:17:17.070 Test: test_io_channel ...passed 00:17:17.070 Test: test_reset_io ...passed 00:17:17.070 Test: test_write_io ...passed 00:17:17.070 Test: test_read_io ...passed 00:17:17.070 Test: test_unmap_io ...passed 00:17:17.070 Test: test_io_failure ...[2024-05-15 11:10:34.529785] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 961:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:17:17.070 passed 00:17:17.070 Test: test_multi_raid_no_io ...passed 00:17:17.070 Test: test_multi_raid_with_io ...passed 00:17:17.070 Test: test_io_type_supported ...passed 00:17:17.070 Test: test_raid_json_dump_info ...passed 00:17:17.070 Test: test_context_size ...passed 00:17:17.070 Test: test_raid_level_conversions ...passed 00:17:17.070 Test: test_raid_io_split ...passed 00:17:17.070 Test: test_raid_process ...passedTest Options 00:17:17.070 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 1 00:17:17.070 
00:17:17.070 Suite: raid_dif 00:17:17.070 Test: test_create_raid ...passed 00:17:17.070 Test: test_create_raid_superblock ...passed 00:17:17.070 Test: test_delete_raid ...passed 00:17:17.070 Test: test_create_raid_invalid_args ...[2024-05-15 11:10:34.540302] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:17:17.070 [2024-05-15 11:10:34.540427] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:17:17.070 [2024-05-15 11:10:34.540653] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:17:17.070 [2024-05-15 11:10:34.540720] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:17:17.070 [2024-05-15 11:10:34.540738] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:17:17.070 [2024-05-15 11:10:34.541346] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:17:17.070 [2024-05-15 11:10:34.541370] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:17:17.070 passed 00:17:17.070 Test: test_delete_raid_invalid_args ...passed 00:17:17.070 Test: test_io_channel ...passed 00:17:17.070 Test: test_reset_io ...passed 00:17:17.070 Test: test_write_io ...passed 00:17:17.070 Test: test_read_io ...passed 00:17:17.070 Test: test_unmap_io ...passed 00:17:17.070 Test: test_io_failure ...[2024-05-15 11:10:35.503197] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 961:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:17:17.070 passed 00:17:17.070 Test: test_multi_raid_no_io ...passed 00:17:17.070 Test: test_multi_raid_with_io ...passed 00:17:17.070 Test: test_io_type_supported ...passed 00:17:17.070 Test: test_raid_json_dump_info ...passed 00:17:17.070 Test: test_context_size ...passed 00:17:17.070 Test: test_raid_level_conversions ...passed 00:17:17.070 Test: test_raid_io_split ...passed 00:17:17.070 Test: test_raid_process ...passed 00:17:17.070 00:17:17.070 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.070 suites 2 2 n/a 0 0 00:17:17.070 tests 38 38 38 0 0 00:17:17.070 asserts 355741 355741 355741 0 n/a 00:17:17.070 00:17:17.070 Elapsed time = 1.920 seconds 00:17:17.070 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:17:17.070 00:17:17.070 00:17:17.070 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.070 http://cunit.sourceforge.net/ 00:17:17.070 00:17:17.070 00:17:17.070 Suite: raid_sb 00:17:17.070 Test: test_raid_bdev_write_superblock ...passed 00:17:17.070 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:17:17.070 Test: test_raid_bdev_parse_superblock ...passed 00:17:17.070 Suite: raid_sb_md 00:17:17.070 Test: test_raid_bdev_write_superblock ...passed 00:17:17.070 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:17:17.070 Test: test_raid_bdev_parse_superblock ...passed 00:17:17.070 Suite: raid_sb_md_interleaved 00:17:17.070 Test: test_raid_bdev_write_superblock ...[2024-05-15 11:10:35.557762] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:17:17.070 [2024-05-15 11:10:35.558475] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:17:17.070 passed 00:17:17.070 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:17:17.070 Test: test_raid_bdev_parse_superblock ...passed[2024-05-15 11:10:35.558889] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:17:17.070 00:17:17.070 00:17:17.070 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.070 suites 3 3 n/a 0 0 00:17:17.070 tests 9 9 9 0 0 00:17:17.070 asserts 139 139 139 0 n/a 00:17:17.070 00:17:17.070 Elapsed time = 0.000 seconds 00:17:17.070 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:17:17.070 00:17:17.070 00:17:17.070 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.070 http://cunit.sourceforge.net/ 00:17:17.070 00:17:17.070 00:17:17.070 Suite: concat 00:17:17.070 Test: test_concat_start ...passed 00:17:17.070 Test: test_concat_rw ...passed 00:17:17.070 Test: test_concat_null_payload ...passed 00:17:17.070 00:17:17.070 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.070 suites 1 1 n/a 0 0 00:17:17.070 tests 3 3 3 0 0 00:17:17.070 asserts 8460 8460 8460 0 n/a 00:17:17.070 00:17:17.070 Elapsed time = 0.000 seconds 00:17:17.070 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:17:17.070 00:17:17.070 00:17:17.070 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.070 http://cunit.sourceforge.net/ 00:17:17.070 00:17:17.070 00:17:17.070 Suite: raid1 00:17:17.070 Test: test_raid1_start ...passed 00:17:17.070 Test: test_raid1_read_balancing ...passed 00:17:17.070 00:17:17.070 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.070 suites 1 1 n/a 0 0 00:17:17.070 tests 2 2 2 0 0 00:17:17.070 asserts 2880 2880 2880 0 n/a 00:17:17.070 00:17:17.070 Elapsed time = 0.000 seconds 00:17:17.070 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:17:17.070 00:17:17.070 00:17:17.070 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.070 http://cunit.sourceforge.net/ 00:17:17.070 00:17:17.070 00:17:17.070 Suite: zone 00:17:17.070 Test: test_zone_get_operation ...passed 00:17:17.070 Test: test_bdev_zone_get_info ...passed 00:17:17.070 Test: test_bdev_zone_management ...passed 00:17:17.070 Test: test_bdev_zone_append ...passed 00:17:17.070 Test: test_bdev_zone_append_with_md ...passed 00:17:17.071 Test: test_bdev_zone_appendv ...passed 00:17:17.071 Test: test_bdev_zone_appendv_with_md ...passed 00:17:17.071 Test: test_bdev_io_get_append_location ...passed 00:17:17.071 00:17:17.071 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.071 suites 1 1 n/a 0 0 00:17:17.071 tests 8 8 8 0 0 00:17:17.071 asserts 94 94 94 0 n/a 00:17:17.071 00:17:17.071 Elapsed time = 0.000 seconds 00:17:17.071 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:17:17.071 00:17:17.071 00:17:17.071 CUnit - A unit testing 
framework for C - Version 2.1-3 00:17:17.071 http://cunit.sourceforge.net/ 00:17:17.071 00:17:17.071 00:17:17.071 Suite: gpt_parse 00:17:17.071 Test: test_parse_mbr_and_primary ...[2024-05-15 11:10:35.648516] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:17:17.071 [2024-05-15 11:10:35.648740] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:17:17.071 [2024-05-15 11:10:35.648782] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:17:17.071 [2024-05-15 11:10:35.648855] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:17:17.071 [2024-05-15 11:10:35.648888] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:17:17.071 [2024-05-15 11:10:35.648938] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:17:17.071 passed 00:17:17.071 Test: test_parse_secondary ...[2024-05-15 11:10:35.649195] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:17:17.071 [2024-05-15 11:10:35.649232] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:17:17.071 [2024-05-15 11:10:35.649256] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:17:17.071 [2024-05-15 11:10:35.649279] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:17:17.071 passed 00:17:17.071 Test: test_check_mbr ...[2024-05-15 11:10:35.649539] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:17:17.071 passed 00:17:17.071 Test: test_read_header ...[2024-05-15 11:10:35.649572] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:17:17.071 [2024-05-15 11:10:35.649602] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:17:17.071 [2024-05-15 11:10:35.649669] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:17:17.071 [2024-05-15 11:10:35.649733] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:17:17.071 [2024-05-15 11:10:35.649772] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:17:17.071 [2024-05-15 11:10:35.649797] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:17:17.071 passed 00:17:17.071 Test: test_read_partitions ...[2024-05-15 11:10:35.649838] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:17:17.071 [2024-05-15 11:10:35.649875] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:17:17.071 [2024-05-15 11:10:35.649919] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:17:17.071 [2024-05-15 11:10:35.649944] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:17:17.071 [2024-05-15 11:10:35.649962] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:17:17.071 [2024-05-15 11:10:35.650094] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:17:17.071 passed 00:17:17.071 00:17:17.071 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.071 suites 1 1 n/a 0 0 00:17:17.071 tests 5 5 5 0 0 00:17:17.071 asserts 33 33 33 0 n/a 00:17:17.071 00:17:17.071 Elapsed time = 0.000 seconds 00:17:17.071 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:17:17.071 00:17:17.071 00:17:17.071 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.071 http://cunit.sourceforge.net/ 00:17:17.071 00:17:17.071 00:17:17.071 Suite: bdev_part 00:17:17.071 Test: part_test ...passed 00:17:17.071 Test: part_free_test ...passed 00:17:17.071 Test: part_get_io_channel_test ...[2024-05-15 11:10:35.670294] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:17:17.339 passed 00:17:17.339 Test: part_construct_ext ...passed 00:17:17.339 00:17:17.339 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.339 suites 1 1 n/a 0 0 00:17:17.339 tests 4 4 4 0 0 00:17:17.339 asserts 48 48 48 0 n/a 00:17:17.339 00:17:17.339 Elapsed time = 0.040 seconds 00:17:17.339 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:17:17.339 00:17:17.339 00:17:17.339 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.339 http://cunit.sourceforge.net/ 00:17:17.339 00:17:17.339 00:17:17.339 Suite: scsi_nvme_suite 00:17:17.339 Test: scsi_nvme_translate_test ...passed 00:17:17.339 00:17:17.339 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.339 suites 1 1 n/a 0 0 00:17:17.339 tests 1 1 1 0 0 00:17:17.339 asserts 104 104 104 0 n/a 00:17:17.339 00:17:17.339 Elapsed time = 0.000 seconds 00:17:17.339 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:17:17.339 00:17:17.339 00:17:17.339 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.339 http://cunit.sourceforge.net/ 00:17:17.339 00:17:17.339 00:17:17.339 Suite: lvol 00:17:17.339 Test: ut_lvs_init ...[2024-05-15 11:10:35.761304] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:17:17.339 passed 00:17:17.339 Test: ut_lvol_init ...passed 00:17:17.339 Test: ut_lvol_snapshot ...passed 00:17:17.339 Test: ut_lvol_clone ...passed 00:17:17.339 Test: ut_lvs_destroy ...passed 00:17:17.339 Test: ut_lvs_unload ...passed 00:17:17.339 Test: ut_lvol_resize ...passed 00:17:17.339 Test: ut_lvol_set_read_only ...passed 00:17:17.339 Test: ut_lvol_hotremove ...passed 00:17:17.339 Test: ut_vbdev_lvol_get_io_channel ...passed 00:17:17.339 Test: ut_vbdev_lvol_io_type_supported ...passed 00:17:17.339 Test: ut_lvol_read_write ...[2024-05-15 11:10:35.762112] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 
264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:17:17.339 [2024-05-15 11:10:35.763308] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:17:17.339 passed 00:17:17.339 Test: ut_vbdev_lvol_submit_request ...passed 00:17:17.339 Test: ut_lvol_examine_config ...passed 00:17:17.339 Test: ut_lvol_examine_disk ...[2024-05-15 11:10:35.764119] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:17:17.339 passed 00:17:17.339 Test: ut_lvol_rename ...passed 00:17:17.339 Test: ut_bdev_finish ...passed 00:17:17.339 Test: ut_lvs_rename ...passed 00:17:17.339 Test: ut_lvol_seek ...passed 00:17:17.339 Test: ut_esnap_dev_create ...passed 00:17:17.339 Test: ut_lvol_esnap_clone_bad_args ...passed 00:17:17.339 Test: ut_lvol_shallow_copy ...passed 00:17:17.339 00:17:17.339 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.339 suites 1 1 n/a 0 0 00:17:17.339 tests 22 22 22 0 0 00:17:17.339 asserts 793 793 793 0 n/a 00:17:17.339 00:17:17.339 Elapsed time = 0.010 seconds 00:17:17.339 [2024-05-15 11:10:35.764859] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:17:17.339 [2024-05-15 11:10:35.765009] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:17:17.339 [2024-05-15 11:10:35.765551] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:17:17.339 [2024-05-15 11:10:35.765660] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:17:17.339 [2024-05-15 11:10:35.765704] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:17:17.339 [2024-05-15 11:10:35.765787] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1911:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:17:17.339 [2024-05-15 11:10:35.766007] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:17:17.339 [2024-05-15 11:10:35.766071] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:17:17.339 [2024-05-15 11:10:35.766404] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:17:17.339 [2024-05-15 11:10:35.766491] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:17:17.339 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:17:17.339 00:17:17.339 00:17:17.339 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.339 http://cunit.sourceforge.net/ 00:17:17.339 00:17:17.339 00:17:17.339 Suite: zone_block 00:17:17.339 Test: test_zone_block_create ...passed 00:17:17.339 Test: test_zone_block_create_invalid ...passed 00:17:17.339 Test: test_get_zone_info ...passed 00:17:17.339 Test: test_supported_io_types ...passed 
00:17:17.339 Test: test_reset_zone ...passed 00:17:17.339 Test: test_open_zone ...[2024-05-15 11:10:35.813311] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:17:17.339 [2024-05-15 11:10:35.813559] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-15 11:10:35.813672] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:17:17.339 [2024-05-15 11:10:35.813721] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-15 11:10:35.813768] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:17:17.339 [2024-05-15 11:10:35.813822] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-15 11:10:35.813855] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:17:17.339 [2024-05-15 11:10:35.813919] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-15 11:10:35.814190] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.814243] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.814289] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.814614] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.814650] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.814878] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 passed 00:17:17.339 Test: test_zone_write ...[2024-05-15 11:10:35.815371] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.815420] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.815658] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:17:17.339 [2024-05-15 11:10:35.815696] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:17:17.339 [2024-05-15 11:10:35.815754] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:17:17.339 [2024-05-15 11:10:35.815799] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.821126] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:17:17.339 [2024-05-15 11:10:35.821163] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.821224] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:17:17.339 [2024-05-15 11:10:35.821255] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 passed 00:17:17.339 Test: test_zone_read ...passed 00:17:17.339 Test: test_close_zone ...[2024-05-15 11:10:35.827107] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:17:17.339 [2024-05-15 11:10:35.827176] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.827424] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:17:17.339 [2024-05-15 11:10:35.827463] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.827511] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:17:17.339 [2024-05-15 11:10:35.827542] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.827801] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:17:17.339 [2024-05-15 11:10:35.827852] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.828046] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.828095] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 passed 00:17:17.339 Test: test_finish_zone ...passed 00:17:17.339 Test: test_append_zone ...[2024-05-15 11:10:35.828188] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:17:17.339 [2024-05-15 11:10:35.828229] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.828487] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.828524] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.828725] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:17:17.339 [2024-05-15 11:10:35.828760] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 [2024-05-15 11:10:35.828802] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:17:17.339 [2024-05-15 11:10:35.828841] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:17:17.339 passed 00:17:17.339 00:17:17.339 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.339 suites 1 1 n/a 0 0 00:17:17.339 tests 11 11 11 0 0 00:17:17.339 asserts 3437 3437 3437 0 n/a 00:17:17.339 00:17:17.339 Elapsed time = 0.030 seconds 00:17:17.339 [2024-05-15 11:10:35.840216] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:17:17.339 [2024-05-15 11:10:35.840277] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
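The zone_block errors above are the negative paths the suite provokes deliberately: a write is rejected when the zone is in the wrong state, when the LBA lies outside the zone, when it does not line up with the zone's current write pointer, or when it would run past the zone capacity. A minimal sketch of that kind of validation, reconstructed from those messages alone (the struct, field, and constant names below are illustrative assumptions, not the actual vbdev_zone_block code):

    #include <errno.h>
    #include <stdint.h>

    enum zone_state { ZONE_EMPTY, ZONE_OPEN, ZONE_FULL };

    struct zone_info {
        uint64_t start_lba;   /* first LBA of the zone */
        uint64_t capacity;    /* writable blocks in the zone */
        uint64_t write_ptr;   /* next LBA that may be written */
        enum zone_state state;
    };

    /* Returns 0 if a write of 'len' blocks starting at 'lba' is acceptable,
     * otherwise a negative errno matching the rejections logged above. */
    static int zone_write_ok(const struct zone_info *z, uint64_t lba, uint64_t len)
    {
        if (z->state == ZONE_FULL)
            return -EINVAL;   /* "Trying to write to zone in invalid state" */
        if (lba < z->start_lba || lba >= z->start_lba + z->capacity)
            return -EINVAL;   /* "Trying to write to invalid zone (lba ...)" */
        if (lba != z->write_ptr)
            return -EINVAL;   /* "invalid address (lba ..., wp ...)" */
        if (lba + len > z->start_lba + z->capacity)
            return -EINVAL;   /* "Write exceeds zone capacity" */
        return 0;
    }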
00:17:17.339 11:10:35 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:17:17.339 00:17:17.339 00:17:17.339 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.339 http://cunit.sourceforge.net/ 00:17:17.339 00:17:17.339 00:17:17.339 Suite: bdev 00:17:17.339 Test: basic ...[2024-05-15 11:10:35.925989] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51e201): Operation not permitted (rc=-1) 00:17:17.339 [2024-05-15 11:10:35.926239] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x51e1c0): Operation not permitted (rc=-1) 00:17:17.339 [2024-05-15 11:10:35.926283] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51e201): Operation not permitted (rc=-1) 00:17:17.339 passed 00:17:17.604 Test: unregister_and_close ...passed 00:17:17.604 Test: unregister_and_close_different_threads ...passed 00:17:17.604 Test: basic_qos ...passed 00:17:17.604 Test: put_channel_during_reset ...passed 00:17:17.604 Test: aborted_reset ...passed 00:17:17.863 Test: aborted_reset_no_outstanding_io ...passed 00:17:17.863 Test: io_during_reset ...passed 00:17:17.863 Test: reset_completions ...passed 00:17:17.863 Test: io_during_qos_queue ...passed 00:17:17.863 Test: io_during_qos_reset ...passed 00:17:18.122 Test: enomem ...passed 00:17:18.122 Test: enomem_multi_bdev ...passed 00:17:18.122 Test: enomem_multi_bdev_unregister ...passed 00:17:18.122 Test: enomem_multi_io_target ...passed 00:17:18.122 Test: qos_dynamic_enable ...passed 00:17:18.380 Test: bdev_histograms_mt ...passed 00:17:18.380 Test: bdev_set_io_timeout_mt ...passed 00:17:18.380 Test: lock_lba_range_then_submit_io ...[2024-05-15 11:10:36.849931] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:17:18.380 [2024-05-15 11:10:36.873348] thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x51e180 already registered (old:0x6130000003c0 new:0x613000000c80) 00:17:18.380 passed 00:17:18.380 Test: unregister_during_reset ...passed 00:17:18.380 Test: event_notify_and_close ...passed 00:17:18.380 Suite: bdev_wrong_thread 00:17:18.380 Test: spdk_bdev_register_wt ...passed 00:17:18.380 Test: spdk_bdev_examine_wt ...passed[2024-05-15 11:10:36.992307] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8454:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:17:18.380 [2024-05-15 11:10:36.992581] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:17:18.380 00:17:18.380 00:17:18.380 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.380 suites 2 2 n/a 0 0 00:17:18.380 tests 23 23 23 0 0 00:17:18.380 asserts 601 601 601 0 n/a 00:17:18.380 00:17:18.380 Elapsed time = 1.090 seconds 00:17:18.380 ************************************ 00:17:18.380 END TEST unittest_bdev 00:17:18.380 ************************************ 00:17:18.380 00:17:18.380 real 0m5.270s 00:17:18.380 user 0m2.142s 00:17:18.380 sys 0m3.123s 00:17:18.380 11:10:37 unittest.unittest_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:18.380 11:10:37 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:17:18.638 11:10:37 unittest -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 
00:17:18.639 11:10:37 unittest -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:18.639 11:10:37 unittest -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:18.639 11:10:37 unittest -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:18.639 11:10:37 unittest -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:17:18.639 11:10:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:18.639 11:10:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:18.639 11:10:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:18.639 ************************************ 00:17:18.639 START TEST unittest_blob_blobfs 00:17:18.639 ************************************ 00:17:18.639 11:10:37 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1121 -- # unittest_blob 00:17:18.639 11:10:37 unittest.unittest_blob_blobfs -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:17:18.639 11:10:37 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:17:18.639 00:17:18.639 00:17:18.639 CUnit - A unit testing framework for C - Version 2.1-3 00:17:18.639 http://cunit.sourceforge.net/ 00:17:18.639 00:17:18.639 00:17:18.639 Suite: blob_nocopy_noextent 00:17:18.639 Test: blob_init ...[2024-05-15 11:10:37.089589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5463:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:17:18.639 passed 00:17:18.639 Test: blob_thin_provision ...passed 00:17:18.639 Test: blob_read_only ...passed 00:17:18.639 Test: bs_load ...[2024-05-15 11:10:37.148983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 938:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:17:18.639 passed 00:17:18.639 Test: bs_load_custom_cluster_size ...passed 00:17:18.639 Test: bs_load_after_failed_grow ...passed 00:17:18.639 Test: bs_cluster_sz ...[2024-05-15 11:10:37.177573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:17:18.639 [2024-05-15 11:10:37.178167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5594:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
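The bs_cluster_sz failures in this suite come from deliberately illegal blobstore options: options set to 0, a cluster size (4095) below the 4096-byte page size, and a metadata reservation larger than the device can hold. When a blobstore is initialized for real, the cluster size is chosen through spdk_bs_opts before spdk_bs_init; a hedged sketch follows, where bs_dev, bs_init_done, and cb_arg are placeholders, and note that newer SPDK releases pass sizeof(opts) to spdk_bs_opts_init while older ones take only the pointer:

    struct spdk_bs_opts opts;

    spdk_bs_opts_init(&opts);        /* some releases: spdk_bs_opts_init(&opts, sizeof(opts)) */
    opts.cluster_sz = 1024 * 1024;   /* must be at least the 4096-byte page size; 1 MiB is a common choice */
    opts.num_md_pages = 512;         /* keep the metadata reservation well below the device capacity */

    spdk_bs_init(bs_dev, &opts, bs_init_done, cb_arg);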
00:17:18.639 [2024-05-15 11:10:37.178326] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3856:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:17:18.639 passed 00:17:18.639 Test: bs_resize_md ...passed 00:17:18.639 Test: bs_destroy ...passed 00:17:18.639 Test: bs_type ...passed 00:17:18.639 Test: bs_super_block ...passed 00:17:18.639 Test: bs_test_recover_cluster_count ...passed 00:17:18.639 Test: bs_grow_live ...passed 00:17:18.639 Test: bs_grow_live_no_space ...passed 00:17:18.639 Test: bs_test_grow ...passed 00:17:18.897 Test: blob_serialize_test ...passed 00:17:18.897 Test: super_block_crc ...passed 00:17:18.897 Test: blob_thin_prov_write_count_io ...passed 00:17:18.897 Test: blob_thin_prov_unmap_cluster ...passed 00:17:18.897 Test: bs_load_iter_test ...passed 00:17:18.897 Test: blob_relations ...[2024-05-15 11:10:37.374823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:18.897 [2024-05-15 11:10:37.374949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:18.897 [2024-05-15 11:10:37.376115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:18.897 [2024-05-15 11:10:37.376247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:18.897 passed 00:17:18.897 Test: blob_relations2 ...[2024-05-15 11:10:37.394490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:18.897 [2024-05-15 11:10:37.394599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:18.897 [2024-05-15 11:10:37.394663] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:18.897 [2024-05-15 11:10:37.394705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:18.897 [2024-05-15 11:10:37.396266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:18.897 [2024-05-15 11:10:37.396344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:18.897 [2024-05-15 11:10:37.396732] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:18.897 [2024-05-15 11:10:37.396782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:18.897 passed 00:17:18.897 Test: blob_relations3 ...passed 00:17:19.157 Test: blobstore_clean_power_failure ...passed 00:17:19.157 Test: blob_delete_snapshot_power_failure ...[2024-05-15 11:10:37.553919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:17:19.157 [2024-05-15 11:10:37.565760] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:17:19.157 [2024-05-15 11:10:37.566124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:19.157 [2024-05-15 11:10:37.566188] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:19.157 [2024-05-15 11:10:37.579508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:17:19.157 [2024-05-15 11:10:37.579628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:17:19.157 [2024-05-15 11:10:37.579689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:19.157 [2024-05-15 11:10:37.579772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:19.157 [2024-05-15 11:10:37.596415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:17:19.157 [2024-05-15 11:10:37.596564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:19.157 [2024-05-15 11:10:37.609611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:17:19.157 [2024-05-15 11:10:37.609750] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:19.157 [2024-05-15 11:10:37.623390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:17:19.157 [2024-05-15 11:10:37.623513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:19.157 passed 00:17:19.157 Test: blob_create_snapshot_power_failure ...[2024-05-15 11:10:37.660154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:17:19.157 [2024-05-15 11:10:37.681931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:17:19.157 [2024-05-15 11:10:37.693399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:17:19.157 passed 00:17:19.157 Test: blob_io_unit ...passed 00:17:19.157 Test: blob_io_unit_compatibility ...passed 00:17:19.157 Test: blob_ext_md_pages ...passed 00:17:19.415 Test: blob_esnap_io_4096_4096 ...passed 00:17:19.415 Test: blob_esnap_io_512_512 ...passed 00:17:19.415 Test: blob_esnap_io_4096_512 ...passed 00:17:19.415 Test: blob_esnap_io_512_4096 ...passed 00:17:19.415 Test: blob_esnap_clone_resize ...passed 00:17:19.416 Suite: blob_bs_nocopy_noextent 00:17:19.416 Test: blob_open ...passed 00:17:19.416 Test: blob_create ...[2024-05-15 11:10:37.949947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:17:19.416 passed 00:17:19.416 Test: blob_create_loop ...passed 00:17:19.416 Test: blob_create_fail ...[2024-05-15 11:10:38.046830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:19.674 passed 00:17:19.674 Test: blob_create_internal ...passed 00:17:19.674 Test: blob_create_zero_extent ...passed 00:17:19.674 Test: blob_snapshot ...passed 00:17:19.674 Test: blob_clone ...passed 00:17:19.674 Test: blob_inflate 
...[2024-05-15 11:10:38.232507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:17:19.674 passed 00:17:19.674 Test: blob_delete ...passed 00:17:19.674 Test: blob_resize_test ...[2024-05-15 11:10:38.305950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:17:19.932 passed 00:17:19.932 Test: blob_resize_thin_test ...passed 00:17:19.932 Test: channel_ops ...passed 00:17:19.932 Test: blob_super ...passed 00:17:19.932 Test: blob_rw_verify_iov ...passed 00:17:19.932 Test: blob_unmap ...passed 00:17:19.932 Test: blob_iter ...passed 00:17:19.932 Test: blob_parse_md ...passed 00:17:20.190 Test: bs_load_pending_removal ...passed 00:17:20.190 Test: bs_unload ...[2024-05-15 11:10:38.611100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:17:20.190 passed 00:17:20.190 Test: bs_usable_clusters ...passed 00:17:20.190 Test: blob_crc ...[2024-05-15 11:10:38.677259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:17:20.190 [2024-05-15 11:10:38.677401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:17:20.190 passed 00:17:20.190 Test: blob_flags ...passed 00:17:20.190 Test: bs_version ...passed 00:17:20.190 Test: blob_set_xattrs_test ...[2024-05-15 11:10:38.777953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:20.190 [2024-05-15 11:10:38.778068] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:20.190 passed 00:17:20.447 Test: blob_thin_prov_alloc ...passed 00:17:20.447 Test: blob_insert_cluster_msg_test ...passed 00:17:20.447 Test: blob_thin_prov_rw ...passed 00:17:20.447 Test: blob_thin_prov_rle ...passed 00:17:20.448 Test: blob_thin_prov_rw_iov ...passed 00:17:20.448 Test: blob_snapshot_rw ...passed 00:17:20.448 Test: blob_snapshot_rw_iov ...passed 00:17:20.756 Test: blob_inflate_rw ...passed 00:17:20.756 Test: blob_snapshot_freeze_io ...passed 00:17:21.015 Test: blob_operation_split_rw ...passed 00:17:21.015 Test: blob_operation_split_rw_iov ...passed 00:17:21.015 Test: blob_simultaneous_operations ...[2024-05-15 11:10:39.522414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:21.015 [2024-05-15 11:10:39.522533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:21.015 [2024-05-15 11:10:39.524518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:21.015 [2024-05-15 11:10:39.524657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:21.015 [2024-05-15 11:10:39.543515] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:21.015 [2024-05-15 11:10:39.543617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:21.015 [2024-05-15 11:10:39.543746] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:21.015 [2024-05-15 11:10:39.543788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:21.015 passed 00:17:21.015 Test: blob_persist_test ...passed 00:17:21.274 Test: blob_decouple_snapshot ...passed 00:17:21.274 Test: blob_seek_io_unit ...passed 00:17:21.274 Test: blob_nested_freezes ...passed 00:17:21.274 Test: blob_clone_resize ...passed 00:17:21.274 Test: blob_shallow_copy ...[2024-05-15 11:10:39.864091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:17:21.274 [2024-05-15 11:10:39.864875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7315:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:17:21.274 [2024-05-15 11:10:39.865236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7323:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:17:21.274 passed 00:17:21.274 Suite: blob_blob_nocopy_noextent 00:17:21.533 Test: blob_write ...passed 00:17:21.533 Test: blob_read ...passed 00:17:21.533 Test: blob_rw_verify ...passed 00:17:21.533 Test: blob_rw_verify_iov_nomem ...passed 00:17:21.533 Test: blob_rw_iov_read_only ...passed 00:17:21.533 Test: blob_xattr ...passed 00:17:21.533 Test: blob_dirty_shutdown ...passed 00:17:21.791 Test: blob_is_degraded ...passed 00:17:21.791 Suite: blob_esnap_bs_nocopy_noextent 00:17:21.791 Test: blob_esnap_create ...passed 00:17:21.792 Test: blob_esnap_thread_add_remove ...passed 00:17:21.792 Test: blob_esnap_clone_snapshot ...passed 00:17:21.792 Test: blob_esnap_clone_inflate ...passed 00:17:21.792 Test: blob_esnap_clone_decouple ...passed 00:17:21.792 Test: blob_esnap_clone_reload ...passed 00:17:21.792 Test: blob_esnap_hotplug ...passed 00:17:21.792 Suite: blob_nocopy_extent 00:17:21.792 Test: blob_init ...[2024-05-15 11:10:40.425752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5463:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:17:22.050 passed 00:17:22.050 Test: blob_thin_provision ...passed 00:17:22.050 Test: blob_read_only ...passed 00:17:22.050 Test: bs_load ...[2024-05-15 11:10:40.475200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 938:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:17:22.050 passed 00:17:22.050 Test: bs_load_custom_cluster_size ...passed 00:17:22.050 Test: bs_load_after_failed_grow ...passed 00:17:22.050 Test: bs_cluster_sz ...[2024-05-15 11:10:40.503308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:17:22.050 [2024-05-15 11:10:40.503653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5594:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
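Several of the failures in this stretch are likewise intentional: spdk_bs_unload reports "Blobstore still has open blobs" while a blob is still open, and delete is refused for a snapshot that is open or still has more than one clone. The usual shutdown order is therefore to close every open blob before unloading the blobstore; a hedged sketch with placeholder names (blob, bs, and the *_done callbacks are assumptions for illustration, the API calls are the public blob API):

    #include "spdk/blob.h"

    static void unload_done(void *cb_arg, int bserrno)
    {
        /* bserrno is 0 once no blobs were left open */
    }

    static void close_done(void *cb_arg, int bserrno)
    {
        struct spdk_blob_store *bs = cb_arg;

        /* Only after the last open blob is closed may the blobstore be unloaded. */
        spdk_bs_unload(bs, unload_done, NULL);
    }

    static void shutdown_blobstore(struct spdk_blob *blob, struct spdk_blob_store *bs)
    {
        spdk_blob_close(blob, close_done, bs);
    }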
00:17:22.050 [2024-05-15 11:10:40.503730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3856:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:17:22.050 passed 00:17:22.050 Test: bs_resize_md ...passed 00:17:22.050 Test: bs_destroy ...passed 00:17:22.050 Test: bs_type ...passed 00:17:22.050 Test: bs_super_block ...passed 00:17:22.050 Test: bs_test_recover_cluster_count ...passed 00:17:22.050 Test: bs_grow_live ...passed 00:17:22.050 Test: bs_grow_live_no_space ...passed 00:17:22.050 Test: bs_test_grow ...passed 00:17:22.050 Test: blob_serialize_test ...passed 00:17:22.050 Test: super_block_crc ...passed 00:17:22.050 Test: blob_thin_prov_write_count_io ...passed 00:17:22.050 Test: blob_thin_prov_unmap_cluster ...passed 00:17:22.050 Test: bs_load_iter_test ...passed 00:17:22.309 Test: blob_relations ...[2024-05-15 11:10:40.693225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:22.309 [2024-05-15 11:10:40.693369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.309 [2024-05-15 11:10:40.694554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:22.309 [2024-05-15 11:10:40.694630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.309 passed 00:17:22.309 Test: blob_relations2 ...[2024-05-15 11:10:40.709473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:22.309 [2024-05-15 11:10:40.709593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.309 [2024-05-15 11:10:40.709698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:22.309 [2024-05-15 11:10:40.709739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.309 [2024-05-15 11:10:40.712925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:22.309 [2024-05-15 11:10:40.713059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.309 [2024-05-15 11:10:40.714452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:22.309 [2024-05-15 11:10:40.714588] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.309 passed 00:17:22.309 Test: blob_relations3 ...passed 00:17:22.309 Test: blobstore_clean_power_failure ...passed 00:17:22.309 Test: blob_delete_snapshot_power_failure ...[2024-05-15 11:10:40.880927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:17:22.309 [2024-05-15 11:10:40.895580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:17:22.309 [2024-05-15 11:10:40.909089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:17:22.309 [2024-05-15 11:10:40.909225] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:22.309 [2024-05-15 11:10:40.909288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.309 [2024-05-15 11:10:40.923209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:17:22.309 [2024-05-15 11:10:40.923330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:17:22.309 [2024-05-15 11:10:40.923375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:22.309 [2024-05-15 11:10:40.923429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.309 [2024-05-15 11:10:40.941144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:17:22.309 [2024-05-15 11:10:40.941264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:17:22.309 [2024-05-15 11:10:40.941311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:22.309 [2024-05-15 11:10:40.941350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.567 [2024-05-15 11:10:40.955959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:17:22.568 [2024-05-15 11:10:40.956111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.568 [2024-05-15 11:10:40.970895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:17:22.568 [2024-05-15 11:10:40.971044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.568 [2024-05-15 11:10:40.985908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:17:22.568 [2024-05-15 11:10:40.986029] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:22.568 passed 00:17:22.568 Test: blob_create_snapshot_power_failure ...[2024-05-15 11:10:41.029292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:17:22.568 [2024-05-15 11:10:41.042887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:17:22.568 [2024-05-15 11:10:41.068633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:17:22.568 [2024-05-15 11:10:41.081419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:17:22.568 passed 00:17:22.568 Test: blob_io_unit ...passed 00:17:22.568 Test: blob_io_unit_compatibility ...passed 00:17:22.568 Test: blob_ext_md_pages ...passed 00:17:22.568 Test: blob_esnap_io_4096_4096 ...passed 00:17:22.826 Test: blob_esnap_io_512_512 ...passed 00:17:22.826 Test: blob_esnap_io_4096_512 ...passed 00:17:22.826 Test: 
blob_esnap_io_512_4096 ...passed 00:17:22.826 Test: blob_esnap_clone_resize ...passed 00:17:22.826 Suite: blob_bs_nocopy_extent 00:17:22.826 Test: blob_open ...passed 00:17:22.826 Test: blob_create ...[2024-05-15 11:10:41.365071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:17:22.826 passed 00:17:23.084 Test: blob_create_loop ...passed 00:17:23.084 Test: blob_create_fail ...[2024-05-15 11:10:41.485533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:23.084 passed 00:17:23.084 Test: blob_create_internal ...passed 00:17:23.084 Test: blob_create_zero_extent ...passed 00:17:23.084 Test: blob_snapshot ...passed 00:17:23.084 Test: blob_clone ...passed 00:17:23.084 Test: blob_inflate ...[2024-05-15 11:10:41.684069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:17:23.084 passed 00:17:23.342 Test: blob_delete ...passed 00:17:23.342 Test: blob_resize_test ...[2024-05-15 11:10:41.753604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:17:23.343 passed 00:17:23.343 Test: blob_resize_thin_test ...passed 00:17:23.343 Test: channel_ops ...passed 00:17:23.343 Test: blob_super ...passed 00:17:23.343 Test: blob_rw_verify_iov ...passed 00:17:23.343 Test: blob_unmap ...passed 00:17:23.600 Test: blob_iter ...passed 00:17:23.600 Test: blob_parse_md ...passed 00:17:23.600 Test: bs_load_pending_removal ...passed 00:17:23.600 Test: bs_unload ...[2024-05-15 11:10:42.067030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:17:23.600 passed 00:17:23.600 Test: bs_usable_clusters ...passed 00:17:23.600 Test: blob_crc ...[2024-05-15 11:10:42.132835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:17:23.600 [2024-05-15 11:10:42.132947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:17:23.600 passed 00:17:23.600 Test: blob_flags ...passed 00:17:23.600 Test: bs_version ...passed 00:17:23.601 Test: blob_set_xattrs_test ...[2024-05-15 11:10:42.230550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:23.601 [2024-05-15 11:10:42.230725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:23.858 passed 00:17:23.858 Test: blob_thin_prov_alloc ...passed 00:17:23.858 Test: blob_insert_cluster_msg_test ...passed 00:17:23.858 Test: blob_thin_prov_rw ...passed 00:17:23.858 Test: blob_thin_prov_rle ...passed 00:17:23.858 Test: blob_thin_prov_rw_iov ...passed 00:17:23.858 Test: blob_snapshot_rw ...passed 00:17:24.117 Test: blob_snapshot_rw_iov ...passed 00:17:24.117 Test: blob_inflate_rw ...passed 00:17:24.117 Test: blob_snapshot_freeze_io ...passed 00:17:24.374 Test: blob_operation_split_rw ...passed 00:17:24.374 Test: blob_operation_split_rw_iov ...passed 00:17:24.642 Test: blob_simultaneous_operations ...[2024-05-15 11:10:43.016889] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:24.642 [2024-05-15 11:10:43.016996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:24.642 [2024-05-15 11:10:43.018448] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:24.643 [2024-05-15 11:10:43.018518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:24.643 [2024-05-15 11:10:43.038123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:24.643 [2024-05-15 11:10:43.038220] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:24.643 [2024-05-15 11:10:43.038370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:24.643 [2024-05-15 11:10:43.038403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:24.643 passed 00:17:24.643 Test: blob_persist_test ...passed 00:17:24.643 Test: blob_decouple_snapshot ...passed 00:17:24.643 Test: blob_seek_io_unit ...passed 00:17:24.643 Test: blob_nested_freezes ...passed 00:17:24.909 Test: blob_clone_resize ...passed 00:17:24.909 Test: blob_shallow_copy ...[2024-05-15 11:10:43.325322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:17:24.909 [2024-05-15 11:10:43.326337] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7315:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:17:24.909 [2024-05-15 11:10:43.326725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7323:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:17:24.909 passed 00:17:24.909 Suite: blob_blob_nocopy_extent 00:17:24.909 Test: blob_write ...passed 00:17:24.909 Test: blob_read ...passed 00:17:24.909 Test: blob_rw_verify ...passed 00:17:24.909 Test: blob_rw_verify_iov_nomem ...passed 00:17:24.909 Test: blob_rw_iov_read_only ...passed 00:17:25.179 Test: blob_xattr ...passed 00:17:25.179 Test: blob_dirty_shutdown ...passed 00:17:25.179 Test: blob_is_degraded ...passed 00:17:25.179 Suite: blob_esnap_bs_nocopy_extent 00:17:25.179 Test: blob_esnap_create ...passed 00:17:25.179 Test: blob_esnap_thread_add_remove ...passed 00:17:25.179 Test: blob_esnap_clone_snapshot ...passed 00:17:25.179 Test: blob_esnap_clone_inflate ...passed 00:17:25.179 Test: blob_esnap_clone_decouple ...passed 00:17:25.179 Test: blob_esnap_clone_reload ...passed 00:17:25.437 Test: blob_esnap_hotplug ...passed 00:17:25.437 Suite: blob_copy_noextent 00:17:25.437 Test: blob_init ...[2024-05-15 11:10:43.843052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5463:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:17:25.437 passed 00:17:25.437 Test: blob_thin_provision ...passed 00:17:25.437 Test: blob_read_only ...passed 00:17:25.437 Test: bs_load ...[2024-05-15 11:10:43.884819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 938:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:17:25.437 
passed 00:17:25.437 Test: bs_load_custom_cluster_size ...passed 00:17:25.437 Test: bs_load_after_failed_grow ...passed 00:17:25.437 Test: bs_cluster_sz ...[2024-05-15 11:10:43.906640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:17:25.437 [2024-05-15 11:10:43.906806] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5594:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:17:25.437 [2024-05-15 11:10:43.906888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3856:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:17:25.437 passed 00:17:25.437 Test: bs_resize_md ...passed 00:17:25.437 Test: bs_destroy ...passed 00:17:25.437 Test: bs_type ...passed 00:17:25.437 Test: bs_super_block ...passed 00:17:25.437 Test: bs_test_recover_cluster_count ...passed 00:17:25.437 Test: bs_grow_live ...passed 00:17:25.437 Test: bs_grow_live_no_space ...passed 00:17:25.437 Test: bs_test_grow ...passed 00:17:25.437 Test: blob_serialize_test ...passed 00:17:25.437 Test: super_block_crc ...passed 00:17:25.437 Test: blob_thin_prov_write_count_io ...passed 00:17:25.437 Test: blob_thin_prov_unmap_cluster ...passed 00:17:25.696 Test: bs_load_iter_test ...passed 00:17:25.696 Test: blob_relations ...[2024-05-15 11:10:44.085832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:25.696 [2024-05-15 11:10:44.085948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.696 [2024-05-15 11:10:44.086373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:25.696 [2024-05-15 11:10:44.086401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.696 passed 00:17:25.696 Test: blob_relations2 ...[2024-05-15 11:10:44.098829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:25.696 [2024-05-15 11:10:44.098911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.696 [2024-05-15 11:10:44.098950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:25.696 [2024-05-15 11:10:44.098979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.696 [2024-05-15 11:10:44.099655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:25.696 [2024-05-15 11:10:44.099686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.696 [2024-05-15 11:10:44.100328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:25.696 [2024-05-15 11:10:44.100416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.696 passed 00:17:25.696 Test: blob_relations3 ...passed 00:17:25.696 Test: blobstore_clean_power_failure ...passed 00:17:25.696 Test: 
blob_delete_snapshot_power_failure ...[2024-05-15 11:10:44.266324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:17:25.696 [2024-05-15 11:10:44.278903] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:17:25.696 [2024-05-15 11:10:44.279009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:25.696 [2024-05-15 11:10:44.279045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.696 [2024-05-15 11:10:44.291707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:17:25.696 [2024-05-15 11:10:44.291798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:17:25.696 [2024-05-15 11:10:44.292563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:25.696 [2024-05-15 11:10:44.292638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.696 [2024-05-15 11:10:44.308780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:17:25.696 [2024-05-15 11:10:44.308921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.696 [2024-05-15 11:10:44.321930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:17:25.696 [2024-05-15 11:10:44.322074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.954 [2024-05-15 11:10:44.337007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:17:25.954 [2024-05-15 11:10:44.337112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:25.954 passed 00:17:25.954 Test: blob_create_snapshot_power_failure ...[2024-05-15 11:10:44.375249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:17:25.954 [2024-05-15 11:10:44.401190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:17:25.954 [2024-05-15 11:10:44.413339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:17:25.954 passed 00:17:25.954 Test: blob_io_unit ...passed 00:17:25.954 Test: blob_io_unit_compatibility ...passed 00:17:25.954 Test: blob_ext_md_pages ...passed 00:17:25.954 Test: blob_esnap_io_4096_4096 ...passed 00:17:25.954 Test: blob_esnap_io_512_512 ...passed 00:17:25.954 Test: blob_esnap_io_4096_512 ...passed 00:17:26.212 Test: blob_esnap_io_512_4096 ...passed 00:17:26.212 Test: blob_esnap_clone_resize ...passed 00:17:26.212 Suite: blob_bs_copy_noextent 00:17:26.212 Test: blob_open ...passed 00:17:26.212 Test: blob_create ...[2024-05-15 11:10:44.686427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, 
size in clusters/size: 65 (clusters) 00:17:26.212 passed 00:17:26.212 Test: blob_create_loop ...passed 00:17:26.212 Test: blob_create_fail ...[2024-05-15 11:10:44.790990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:26.212 passed 00:17:26.212 Test: blob_create_internal ...passed 00:17:26.470 Test: blob_create_zero_extent ...passed 00:17:26.470 Test: blob_snapshot ...passed 00:17:26.470 Test: blob_clone ...passed 00:17:26.470 Test: blob_inflate ...[2024-05-15 11:10:44.962408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:17:26.470 passed 00:17:26.470 Test: blob_delete ...passed 00:17:26.470 Test: blob_resize_test ...[2024-05-15 11:10:45.024967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:17:26.470 passed 00:17:26.470 Test: blob_resize_thin_test ...passed 00:17:26.470 Test: channel_ops ...passed 00:17:26.727 Test: blob_super ...passed 00:17:26.727 Test: blob_rw_verify_iov ...passed 00:17:26.727 Test: blob_unmap ...passed 00:17:26.727 Test: blob_iter ...passed 00:17:26.727 Test: blob_parse_md ...passed 00:17:26.727 Test: bs_load_pending_removal ...passed 00:17:26.727 Test: bs_unload ...[2024-05-15 11:10:45.317280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:17:26.727 passed 00:17:26.986 Test: bs_usable_clusters ...passed 00:17:26.986 Test: blob_crc ...[2024-05-15 11:10:45.388572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:17:26.986 [2024-05-15 11:10:45.388741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:17:26.986 passed 00:17:26.986 Test: blob_flags ...passed 00:17:26.986 Test: bs_version ...passed 00:17:26.986 Test: blob_set_xattrs_test ...[2024-05-15 11:10:45.491347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:26.986 [2024-05-15 11:10:45.491467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:26.986 passed 00:17:26.986 Test: blob_thin_prov_alloc ...passed 00:17:26.986 Test: blob_insert_cluster_msg_test ...passed 00:17:27.244 Test: blob_thin_prov_rw ...passed 00:17:27.244 Test: blob_thin_prov_rle ...passed 00:17:27.244 Test: blob_thin_prov_rw_iov ...passed 00:17:27.244 Test: blob_snapshot_rw ...passed 00:17:27.244 Test: blob_snapshot_rw_iov ...passed 00:17:27.502 Test: blob_inflate_rw ...passed 00:17:27.502 Test: blob_snapshot_freeze_io ...passed 00:17:27.760 Test: blob_operation_split_rw ...passed 00:17:27.760 Test: blob_operation_split_rw_iov ...passed 00:17:27.760 Test: blob_simultaneous_operations ...[2024-05-15 11:10:46.305486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:27.760 [2024-05-15 11:10:46.305569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:27.760 [2024-05-15 11:10:46.306169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: 
*ERROR*: Cannot remove snapshot because it is open 00:17:27.760 [2024-05-15 11:10:46.306252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:27.760 [2024-05-15 11:10:46.310186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:27.760 [2024-05-15 11:10:46.310258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:27.760 [2024-05-15 11:10:46.310406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:27.760 [2024-05-15 11:10:46.310444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:27.760 passed 00:17:27.760 Test: blob_persist_test ...passed 00:17:28.018 Test: blob_decouple_snapshot ...passed 00:17:28.018 Test: blob_seek_io_unit ...passed 00:17:28.018 Test: blob_nested_freezes ...passed 00:17:28.018 Test: blob_clone_resize ...passed 00:17:28.018 Test: blob_shallow_copy ...[2024-05-15 11:10:46.550327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:17:28.018 [2024-05-15 11:10:46.550689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7315:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:17:28.018 [2024-05-15 11:10:46.551651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7323:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:17:28.018 passed 00:17:28.018 Suite: blob_blob_copy_noextent 00:17:28.018 Test: blob_write ...passed 00:17:28.018 Test: blob_read ...passed 00:17:28.282 Test: blob_rw_verify ...passed 00:17:28.282 Test: blob_rw_verify_iov_nomem ...passed 00:17:28.282 Test: blob_rw_iov_read_only ...passed 00:17:28.282 Test: blob_xattr ...passed 00:17:28.282 Test: blob_dirty_shutdown ...passed 00:17:28.282 Test: blob_is_degraded ...passed 00:17:28.282 Suite: blob_esnap_bs_copy_noextent 00:17:28.282 Test: blob_esnap_create ...passed 00:17:28.282 Test: blob_esnap_thread_add_remove ...passed 00:17:28.560 Test: blob_esnap_clone_snapshot ...passed 00:17:28.560 Test: blob_esnap_clone_inflate ...passed 00:17:28.560 Test: blob_esnap_clone_decouple ...passed 00:17:28.560 Test: blob_esnap_clone_reload ...passed 00:17:28.560 Test: blob_esnap_hotplug ...passed 00:17:28.560 Suite: blob_copy_extent 00:17:28.560 Test: blob_init ...[2024-05-15 11:10:47.081563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5463:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:17:28.560 passed 00:17:28.560 Test: blob_thin_provision ...passed 00:17:28.560 Test: blob_read_only ...passed 00:17:28.560 Test: bs_load ...[2024-05-15 11:10:47.129701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 938:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:17:28.560 passed 00:17:28.560 Test: bs_load_custom_cluster_size ...passed 00:17:28.560 Test: bs_load_after_failed_grow ...passed 00:17:28.560 Test: bs_cluster_sz ...[2024-05-15 11:10:47.158234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:17:28.560 [2024-05-15 11:10:47.158439] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5594:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:17:28.560 [2024-05-15 11:10:47.158512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3856:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:17:28.560 passed 00:17:28.560 Test: bs_resize_md ...passed 00:17:28.818 Test: bs_destroy ...passed 00:17:28.818 Test: bs_type ...passed 00:17:28.818 Test: bs_super_block ...passed 00:17:28.818 Test: bs_test_recover_cluster_count ...passed 00:17:28.818 Test: bs_grow_live ...passed 00:17:28.818 Test: bs_grow_live_no_space ...passed 00:17:28.818 Test: bs_test_grow ...passed 00:17:28.818 Test: blob_serialize_test ...passed 00:17:28.818 Test: super_block_crc ...passed 00:17:28.818 Test: blob_thin_prov_write_count_io ...passed 00:17:28.818 Test: blob_thin_prov_unmap_cluster ...passed 00:17:28.818 Test: bs_load_iter_test ...passed 00:17:28.818 Test: blob_relations ...[2024-05-15 11:10:47.346544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:28.818 [2024-05-15 11:10:47.346681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:28.818 [2024-05-15 11:10:47.348045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:28.818 [2024-05-15 11:10:47.348111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:28.818 passed 00:17:28.818 Test: blob_relations2 ...[2024-05-15 11:10:47.364371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:28.818 [2024-05-15 11:10:47.364444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:28.818 [2024-05-15 11:10:47.364484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:28.818 [2024-05-15 11:10:47.364509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:28.818 [2024-05-15 11:10:47.365653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:28.818 [2024-05-15 11:10:47.365695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:28.818 [2024-05-15 11:10:47.366226] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:17:28.818 [2024-05-15 11:10:47.366264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:28.818 passed 00:17:28.818 Test: blob_relations3 ...passed 00:17:29.076 Test: blobstore_clean_power_failure ...passed 00:17:29.076 Test: blob_delete_snapshot_power_failure ...[2024-05-15 11:10:47.539906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:17:29.076 [2024-05-15 11:10:47.554916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: 
-5 00:17:29.077 [2024-05-15 11:10:47.568983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:17:29.077 [2024-05-15 11:10:47.569127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:29.077 [2024-05-15 11:10:47.569185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:29.077 [2024-05-15 11:10:47.584408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:17:29.077 [2024-05-15 11:10:47.584533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:17:29.077 [2024-05-15 11:10:47.584592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:29.077 [2024-05-15 11:10:47.584650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:29.077 [2024-05-15 11:10:47.599072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:17:29.077 [2024-05-15 11:10:47.599210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:17:29.077 [2024-05-15 11:10:47.599240] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:17:29.077 [2024-05-15 11:10:47.599266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:29.077 [2024-05-15 11:10:47.611653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:17:29.077 [2024-05-15 11:10:47.611786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:29.077 [2024-05-15 11:10:47.624271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:17:29.077 [2024-05-15 11:10:47.624409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:29.077 [2024-05-15 11:10:47.636649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:17:29.077 [2024-05-15 11:10:47.636750] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:29.077 passed 00:17:29.077 Test: blob_create_snapshot_power_failure ...[2024-05-15 11:10:47.677446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:17:29.077 [2024-05-15 11:10:47.688892] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:17:29.335 [2024-05-15 11:10:47.714207] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:17:29.335 [2024-05-15 11:10:47.726037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:17:29.335 passed 00:17:29.335 Test: blob_io_unit ...passed 00:17:29.335 Test: blob_io_unit_compatibility ...passed 
00:17:29.335 Test: blob_ext_md_pages ...passed 00:17:29.335 Test: blob_esnap_io_4096_4096 ...passed 00:17:29.335 Test: blob_esnap_io_512_512 ...passed 00:17:29.335 Test: blob_esnap_io_4096_512 ...passed 00:17:29.335 Test: blob_esnap_io_512_4096 ...passed 00:17:29.335 Test: blob_esnap_clone_resize ...passed 00:17:29.335 Suite: blob_bs_copy_extent 00:17:29.593 Test: blob_open ...passed 00:17:29.593 Test: blob_create ...[2024-05-15 11:10:47.998544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:17:29.593 passed 00:17:29.593 Test: blob_create_loop ...passed 00:17:29.593 Test: blob_create_fail ...[2024-05-15 11:10:48.107756] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:29.593 passed 00:17:29.593 Test: blob_create_internal ...passed 00:17:29.593 Test: blob_create_zero_extent ...passed 00:17:29.593 Test: blob_snapshot ...passed 00:17:29.852 Test: blob_clone ...passed 00:17:29.852 Test: blob_inflate ...[2024-05-15 11:10:48.285527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:17:29.852 passed 00:17:29.852 Test: blob_delete ...passed 00:17:29.852 Test: blob_resize_test ...[2024-05-15 11:10:48.359209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:17:29.852 passed 00:17:29.852 Test: blob_resize_thin_test ...passed 00:17:29.852 Test: channel_ops ...passed 00:17:29.852 Test: blob_super ...passed 00:17:30.110 Test: blob_rw_verify_iov ...passed 00:17:30.110 Test: blob_unmap ...passed 00:17:30.110 Test: blob_iter ...passed 00:17:30.110 Test: blob_parse_md ...passed 00:17:30.110 Test: bs_load_pending_removal ...passed 00:17:30.110 Test: bs_unload ...[2024-05-15 11:10:48.664487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:17:30.110 passed 00:17:30.110 Test: bs_usable_clusters ...passed 00:17:30.110 Test: blob_crc ...[2024-05-15 11:10:48.729857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:17:30.110 [2024-05-15 11:10:48.730039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:17:30.110 passed 00:17:30.368 Test: blob_flags ...passed 00:17:30.368 Test: bs_version ...passed 00:17:30.368 Test: blob_set_xattrs_test ...[2024-05-15 11:10:48.837474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:30.368 [2024-05-15 11:10:48.837589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:17:30.368 passed 00:17:30.368 Test: blob_thin_prov_alloc ...passed 00:17:30.368 Test: blob_insert_cluster_msg_test ...passed 00:17:30.368 Test: blob_thin_prov_rw ...passed 00:17:30.637 Test: blob_thin_prov_rle ...passed 00:17:30.637 Test: blob_thin_prov_rw_iov ...passed 00:17:30.637 Test: blob_snapshot_rw ...passed 00:17:30.637 Test: blob_snapshot_rw_iov ...passed 00:17:30.901 Test: blob_inflate_rw ...passed 00:17:30.901 Test: blob_snapshot_freeze_io ...passed 00:17:30.901 Test: 
blob_operation_split_rw ...passed 00:17:31.159 Test: blob_operation_split_rw_iov ...passed 00:17:31.159 Test: blob_simultaneous_operations ...[2024-05-15 11:10:49.617303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:31.159 [2024-05-15 11:10:49.617462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:31.159 [2024-05-15 11:10:49.618164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:31.159 [2024-05-15 11:10:49.618264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:31.160 [2024-05-15 11:10:49.622352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:31.160 [2024-05-15 11:10:49.622428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:31.160 [2024-05-15 11:10:49.622582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:17:31.160 [2024-05-15 11:10:49.622620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:17:31.160 passed 00:17:31.160 Test: blob_persist_test ...passed 00:17:31.160 Test: blob_decouple_snapshot ...passed 00:17:31.160 Test: blob_seek_io_unit ...passed 00:17:31.160 Test: blob_nested_freezes ...passed 00:17:31.418 Test: blob_clone_resize ...passed 00:17:31.418 Test: blob_shallow_copy ...[2024-05-15 11:10:49.869078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:17:31.418 [2024-05-15 11:10:49.869454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7315:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:17:31.418 [2024-05-15 11:10:49.869706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7323:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:17:31.418 passed 00:17:31.418 Suite: blob_blob_copy_extent 00:17:31.418 Test: blob_write ...passed 00:17:31.418 Test: blob_read ...passed 00:17:31.418 Test: blob_rw_verify ...passed 00:17:31.418 Test: blob_rw_verify_iov_nomem ...passed 00:17:31.678 Test: blob_rw_iov_read_only ...passed 00:17:31.678 Test: blob_xattr ...passed 00:17:31.678 Test: blob_dirty_shutdown ...passed 00:17:31.678 Test: blob_is_degraded ...passed 00:17:31.678 Suite: blob_esnap_bs_copy_extent 00:17:31.678 Test: blob_esnap_create ...passed 00:17:31.678 Test: blob_esnap_thread_add_remove ...passed 00:17:31.678 Test: blob_esnap_clone_snapshot ...passed 00:17:31.982 Test: blob_esnap_clone_inflate ...passed 00:17:31.982 Test: blob_esnap_clone_decouple ...passed 00:17:31.982 Test: blob_esnap_clone_reload ...passed 00:17:31.982 Test: blob_esnap_hotplug ...passed 00:17:31.982 00:17:31.982 Run Summary: Type Total Ran Passed Failed Inactive 00:17:31.982 suites 16 16 n/a 0 0 00:17:31.982 tests 368 368 368 0 0 00:17:31.982 asserts 142985 142985 142985 0 n/a 00:17:31.982 00:17:31.982 Elapsed time = 13.250 seconds 00:17:31.982 11:10:50 unittest.unittest_blob_blobfs -- unit/unittest.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:17:31.982 00:17:31.982 00:17:31.982 CUnit - A unit testing framework for C - Version 2.1-3 00:17:31.982 http://cunit.sourceforge.net/ 00:17:31.982 00:17:31.982 00:17:31.982 Suite: blob_bdev 00:17:31.982 Test: create_bs_dev ...passed 00:17:31.982 Test: create_bs_dev_ro ...passed 00:17:31.982 Test: create_bs_dev_rw ...passed 00:17:31.982 Test: claim_bs_dev ...passed 00:17:31.982 Test: claim_bs_dev_ro ...passed 00:17:31.982 Test: deferred_destroy_refs ...passed 00:17:31.982 Test: deferred_destroy_channels ...passed 00:17:31.982 Test: deferred_destroy_threads ...passed 00:17:31.982 00:17:31.982 Run Summary: Type Total Ran Passed Failed Inactive 00:17:31.982 suites 1 1 n/a 0 0 00:17:31.982 tests 8 8 8 0 0 00:17:31.982 asserts 119 119 119 0 n/a 00:17:31.982 00:17:31.982 Elapsed time = 0.000 seconds 00:17:31.982 [2024-05-15 11:10:50.536202] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:17:31.982 [2024-05-15 11:10:50.536498] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:17:31.982 11:10:50 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:17:31.982 00:17:31.982 00:17:31.982 CUnit - A unit testing framework for C - Version 2.1-3 00:17:31.982 http://cunit.sourceforge.net/ 00:17:31.982 00:17:31.982 00:17:31.982 Suite: tree 00:17:31.982 Test: blobfs_tree_op_test ...passed 00:17:31.983 00:17:31.983 Run Summary: Type Total Ran Passed Failed Inactive 00:17:31.983 suites 1 1 n/a 0 0 00:17:31.983 tests 1 1 1 0 0 00:17:31.983 asserts 27 27 27 0 n/a 00:17:31.983 00:17:31.983 Elapsed time = 0.000 seconds 00:17:31.983 11:10:50 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:17:31.983 00:17:31.983 00:17:31.983 CUnit - A unit testing framework for C - Version 2.1-3 00:17:31.983 http://cunit.sourceforge.net/ 00:17:31.983 00:17:31.983 00:17:31.983 Suite: blobfs_async_ut 00:17:32.241 Test: fs_init ...passed 00:17:32.241 Test: fs_open ...passed 00:17:32.241 Test: fs_create ...passed 00:17:32.241 Test: fs_truncate ...passed 00:17:32.241 Test: fs_rename ...[2024-05-15 11:10:50.676545] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:17:32.241 passed 00:17:32.241 Test: fs_rw_async ...passed 00:17:32.241 Test: fs_writev_readv_async ...passed 00:17:32.241 Test: tree_find_buffer_ut ...passed 00:17:32.241 Test: channel_ops ...passed 00:17:32.241 Test: channel_ops_sync ...passed 00:17:32.241 00:17:32.241 Run Summary: Type Total Ran Passed Failed Inactive 00:17:32.241 suites 1 1 n/a 0 0 00:17:32.241 tests 10 10 10 0 0 00:17:32.241 asserts 292 292 292 0 n/a 00:17:32.241 00:17:32.241 Elapsed time = 0.150 seconds 00:17:32.241 11:10:50 unittest.unittest_blob_blobfs -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:17:32.241 00:17:32.241 00:17:32.241 CUnit - A unit testing framework for C - Version 2.1-3 00:17:32.241 http://cunit.sourceforge.net/ 00:17:32.241 00:17:32.241 00:17:32.241 Suite: blobfs_sync_ut 00:17:32.241 Test: cache_read_after_write ...[2024-05-15 11:10:50.831496] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the 
file=testfile to deleted 00:17:32.241 passed 00:17:32.241 Test: file_length ...passed 00:17:32.241 Test: append_write_to_extend_blob ...passed 00:17:32.500 Test: partial_buffer ...passed 00:17:32.500 Test: cache_write_null_buffer ...passed 00:17:32.500 Test: fs_create_sync ...passed 00:17:32.500 Test: fs_rename_sync ...passed 00:17:32.500 Test: cache_append_no_cache ...passed 00:17:32.500 Test: fs_delete_file_without_close ...passed 00:17:32.500 00:17:32.500 Run Summary: Type Total Ran Passed Failed Inactive 00:17:32.500 suites 1 1 n/a 0 0 00:17:32.500 tests 9 9 9 0 0 00:17:32.500 asserts 345 345 345 0 n/a 00:17:32.500 00:17:32.500 Elapsed time = 0.290 seconds 00:17:32.500 11:10:50 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:17:32.500 00:17:32.500 00:17:32.500 CUnit - A unit testing framework for C - Version 2.1-3 00:17:32.500 http://cunit.sourceforge.net/ 00:17:32.500 00:17:32.500 00:17:32.500 Suite: blobfs_bdev_ut 00:17:32.500 Test: spdk_blobfs_bdev_detect_test ...[2024-05-15 11:10:51.002830] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:17:32.500 passed 00:17:32.500 Test: spdk_blobfs_bdev_create_test ...passed 00:17:32.500 Test: spdk_blobfs_bdev_mount_test ...passed 00:17:32.500 00:17:32.500 Run Summary: Type Total Ran Passed Failed Inactive 00:17:32.500 suites 1 1 n/a 0 0 00:17:32.500 tests 3 3 3 0 0 00:17:32.500 asserts 9 9 9 0 n/a 00:17:32.500 00:17:32.500 Elapsed time = 0.000 seconds 00:17:32.500 [2024-05-15 11:10:51.003113] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:17:32.500 ************************************ 00:17:32.500 END TEST unittest_blob_blobfs 00:17:32.500 ************************************ 00:17:32.500 00:17:32.500 real 0m13.947s 00:17:32.500 user 0m13.243s 00:17:32.500 sys 0m0.791s 00:17:32.500 11:10:51 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.500 11:10:51 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:17:32.500 11:10:51 unittest -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:17:32.500 11:10:51 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:32.500 11:10:51 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:32.500 11:10:51 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:32.500 ************************************ 00:17:32.500 START TEST unittest_event 00:17:32.500 ************************************ 00:17:32.500 11:10:51 unittest.unittest_event -- common/autotest_common.sh@1121 -- # unittest_event 00:17:32.500 11:10:51 unittest.unittest_event -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:17:32.500 00:17:32.500 00:17:32.500 CUnit - A unit testing framework for C - Version 2.1-3 00:17:32.500 http://cunit.sourceforge.net/ 00:17:32.500 00:17:32.500 00:17:32.500 Suite: app_suite 00:17:32.500 Test: test_spdk_app_parse_args ...app_ut [options] 00:17:32.500 00:17:32.500 CPU options: 00:17:32.500 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:17:32.500 (like [0,1,10]) 00:17:32.500 --lcores lcore to CPU mapping list. The list is in the format: 00:17:32.500 [<,lcores[@CPUs]>...] 
00:17:32.500 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:17:32.500 Within the group, '-' is used for range separator, 00:17:32.500 ',' is used for single number separator. 00:17:32.500 '( )' can be omitted for single element group, 00:17:32.500 '@' can be omitted if cpus and lcores have the same value 00:17:32.500 --disable-cpumask-locks Disable CPU core lock files. 00:17:32.500 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:17:32.500 pollers in the app support interrupt mode) 00:17:32.500 -p, --main-core main (primary) core for DPDK 00:17:32.500 00:17:32.500 Configuration options: 00:17:32.500 -c, --config, --json JSON config file 00:17:32.500 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:17:32.500 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:17:32.500 --wait-for-rpc wait for RPCs to initialize subsystems 00:17:32.500 --rpcs-allowed comma-separated list of permitted RPCS 00:17:32.500 --json-ignore-init-errors don't exit on invalid config entry 00:17:32.500 00:17:32.500 Memory options: 00:17:32.500 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:17:32.500 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:17:32.500 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:17:32.500 -R, --huge-unlink unlink huge files after initialization 00:17:32.500 -n, --mem-channels number of memory channels used for DPDK 00:17:32.500 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:17:32.500 --msg-mempool-size global message memory pool size in count (default: 262143) 00:17:32.500 --no-huge run without using hugepages 00:17:32.500 -i, --shm-id shared memory ID (optional) 00:17:32.500 -g, --single-file-segments force creating just one hugetlbfs file 00:17:32.500 00:17:32.500 PCI options: 00:17:32.500 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:17:32.500 -B, --pci-blocked pci addr to block (can be used more than once) 00:17:32.500 -u, --no-pci disable PCI access 00:17:32.500 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:17:32.500 00:17:32.500 Log options: 00:17:32.501 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:17:32.501 --silence-noticelog disable notice level logging to stderr 00:17:32.501 00:17:32.501 Trace options: 00:17:32.501 --num-trace-entries number of trace entries for each core, must be power of 2, 00:17:32.501 setting 0 to disable trace (default 32768) 00:17:32.501 Tracepoints vary in size and can use more than one trace entry. 00:17:32.501 -e, --tpoint-group [:] 00:17:32.501 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:17:32.501 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:17:32.501 a tracepoint group. First tpoint inside a group can be enabled by 00:17:32.501 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:17:32.501 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:17:32.501 in /include/spdk_internal/trace_defs.h 00:17:32.501 00:17:32.501 Other options: 00:17:32.501 -h, --help show this usage 00:17:32.501 -v, --version print SPDK version 00:17:32.501 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:17:32.501 --env-context Opaque context for use of the env implementation 00:17:32.501 app_ut [options] 00:17:32.501 00:17:32.501 CPU options: 00:17:32.501 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:17:32.501 (like [0,1,10]) 00:17:32.501 --lcores lcore to CPU mapping list. The list is in the format: 00:17:32.501 [<,lcores[@CPUs]>...] 00:17:32.501 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:17:32.501 Within the group, '-' is used for range separator, 00:17:32.501 ',' is used for single number separator. 00:17:32.501 '( )' can be omitted for single element group, 00:17:32.501 '@' can be omitted if cpus and lcores have the same value 00:17:32.501 --disable-cpumask-locks Disable CPU core lock files. 00:17:32.501 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:17:32.501 pollers in the app support interrupt mode) 00:17:32.501 -p, --main-core main (primary) core for DPDK 00:17:32.501 00:17:32.501 Configuration options: 00:17:32.501 -c, --config, --json JSON config file 00:17:32.501 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:17:32.501 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:17:32.501 --wait-for-rpc wait for RPCs to initialize subsystems 00:17:32.501 --rpcs-allowed comma-separated list of permitted RPCS 00:17:32.501 --json-ignore-init-errors don't exit on invalid config entry 00:17:32.501 00:17:32.501 Memory options: 00:17:32.501 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:17:32.501 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:17:32.501 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:17:32.501 -R, --huge-unlink unlink huge files after initialization 00:17:32.501 -n, --mem-channels number of memory channels used for DPDK 00:17:32.501 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:17:32.501 --msg-mempool-size global message memory pool size in count (default: 262143) 00:17:32.501 --no-huge run without using hugepages 00:17:32.501 -i, --shm-id shared memory ID (optional) 00:17:32.501 -g, --single-file-segments force creating just one hugetlbfs file 00:17:32.501 00:17:32.501 PCI options: 00:17:32.501 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:17:32.501 -B, --pci-blocked pci addr to block (can be used more than once) 00:17:32.501 -u, --no-pci disable PCI access 00:17:32.501 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:17:32.501 00:17:32.501 Log options: 00:17:32.501 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:17:32.501 --silence-noticelog disable notice level logging to stderr 00:17:32.501 00:17:32.501 Trace options: 00:17:32.501 --num-trace-entries number of trace entries for each core, must be power of 2, 00:17:32.501 setting 0 to disable trace (default 32768) 00:17:32.501 Tracepoints vary in size and can use more than one trace entry. 00:17:32.501 -e, --tpoint-group [:] 00:17:32.501 group_name - tracepoint group name for spdk trace buffers (thread, all). 
00:17:32.501 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:17:32.501 a tracepoint group. First tpoint inside a group can be enabled by 00:17:32.501 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:17:32.501 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:17:32.501 in /include/spdk_internal/trace_defs.h 00:17:32.501 00:17:32.501 Other options: 00:17:32.501 -h, --help show this usage 00:17:32.501 -v, --version print SPDK version 00:17:32.501 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:17:32.501 --env-context Opaque context for use of the env implementation 00:17:32.501 app_ut: invalid option -- 'z' 00:17:32.501 app_ut: unrecognized option '--test-long-opt' 00:17:32.501 [2024-05-15 11:10:51.076658] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:17:32.501 [2024-05-15 11:10:51.076916] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:17:32.501 app_ut [options] 00:17:32.501 00:17:32.501 CPU options: 00:17:32.501 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:17:32.501 (like [0,1,10]) 00:17:32.501 --lcores lcore to CPU mapping list. The list is in the format: 00:17:32.501 [<,lcores[@CPUs]>...] 00:17:32.501 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:17:32.501 Within the group, '-' is used for range separator, 00:17:32.501 ',' is used for single number separator. 00:17:32.501 '( )' can be omitted for single element group, 00:17:32.501 '@' can be omitted if cpus and lcores have the same value 00:17:32.501 --disable-cpumask-locks Disable CPU core lock files. 00:17:32.501 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:17:32.501 pollers in the app support interrupt mode) 00:17:32.501 -p, --main-core main (primary) core for DPDK 00:17:32.501 00:17:32.501 Configuration options: 00:17:32.501 -c, --config, --json JSON config file 00:17:32.501 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:17:32.501 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:17:32.501 --wait-for-rpc wait for RPCs to initialize subsystems 00:17:32.501 --rpcs-allowed comma-separated list of permitted RPCS 00:17:32.501 --json-ignore-init-errors don't exit on invalid config entry 00:17:32.501 00:17:32.501 Memory options: 00:17:32.501 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:17:32.501 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:17:32.501 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:17:32.501 -R, --huge-unlink unlink huge files after initialization 00:17:32.501 -n, --mem-channels number of memory channels used for DPDK 00:17:32.501 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:17:32.501 --msg-mempool-size global message memory pool size in count (default: 262143) 00:17:32.501 --no-huge run without using hugepages 00:17:32.501 -i, --shm-id shared memory ID (optional) 00:17:32.501 -g, --single-file-segments force creating just one hugetlbfs file 00:17:32.501 00:17:32.501 PCI options: 00:17:32.501 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:17:32.501 -B, --pci-blocked pci addr to block (can be used more than once) 00:17:32.501 -u, --no-pci disable PCI access 00:17:32.501 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:17:32.501 00:17:32.501 Log options: 00:17:32.501 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:17:32.501 --silence-noticelog disable notice level logging to stderr 00:17:32.501 00:17:32.501 Trace options: 00:17:32.501 --num-trace-entries number of trace entries for each core, must be power of 2, 00:17:32.501 setting 0 to disable trace (default 32768) 00:17:32.501 Tracepoints vary in size and can use more than one trace entry. 00:17:32.501 -e, --tpoint-group [:] 00:17:32.501 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:17:32.501 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:17:32.501 a tracepoint group. First tpoint inside a group can be enabled by 00:17:32.501 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:17:32.501 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:17:32.501 in /include/spdk_internal/trace_defs.h 00:17:32.501 00:17:32.501 Other options: 00:17:32.501 -h, --help show this usage 00:17:32.501 -v, --version print SPDK version 00:17:32.501 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:17:32.501 --env-context Opaque context for use of the env implementation 00:17:32.501 passed 00:17:32.501 00:17:32.501 Run Summary: Type Total Ran Passed Failed Inactive 00:17:32.501 suites 1 1 n/a 0 0 00:17:32.501 tests 1 1 1 0 0 00:17:32.501 asserts 8 8 8 0 n/a 00:17:32.501 00:17:32.501 Elapsed time = 0.000 seconds 00:17:32.502 [2024-05-15 11:10:51.077161] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:17:32.502 11:10:51 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:17:32.502 00:17:32.502 00:17:32.502 CUnit - A unit testing framework for C - Version 2.1-3 00:17:32.502 http://cunit.sourceforge.net/ 00:17:32.502 00:17:32.502 00:17:32.502 Suite: app_suite 00:17:32.502 Test: test_create_reactor ...passed 00:17:32.502 Test: test_init_reactors ...passed 00:17:32.502 Test: test_event_call ...passed 00:17:32.502 Test: test_schedule_thread ...passed 00:17:32.502 Test: test_reschedule_thread ...passed 00:17:32.502 Test: test_bind_thread ...passed 00:17:32.502 Test: test_for_each_reactor ...passed 00:17:32.502 Test: test_reactor_stats ...passed 00:17:32.502 Test: test_scheduler ...passed 00:17:32.502 Test: test_governor ...passed 00:17:32.502 00:17:32.502 Run Summary: Type Total Ran Passed Failed Inactive 00:17:32.502 suites 1 1 n/a 0 0 00:17:32.502 tests 10 10 10 0 0 00:17:32.502 asserts 344 344 344 0 n/a 00:17:32.502 00:17:32.502 Elapsed time = 0.010 seconds 00:17:32.502 ************************************ 00:17:32.502 END TEST unittest_event 00:17:32.502 ************************************ 00:17:32.502 00:17:32.502 real 0m0.063s 00:17:32.502 user 0m0.036s 00:17:32.502 sys 0m0.029s 00:17:32.502 11:10:51 unittest.unittest_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.502 11:10:51 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:17:32.760 11:10:51 unittest -- unit/unittest.sh@233 -- # uname -s 00:17:32.760 11:10:51 unittest -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:17:32.760 11:10:51 unittest -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:17:32.760 11:10:51 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:32.760 11:10:51 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:32.760 11:10:51 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:32.760 ************************************ 00:17:32.760 START TEST unittest_ftl 00:17:32.760 ************************************ 00:17:32.760 11:10:51 unittest.unittest_ftl -- common/autotest_common.sh@1121 -- # unittest_ftl 00:17:32.760 11:10:51 unittest.unittest_ftl -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:17:32.760 00:17:32.760 00:17:32.760 CUnit - A unit testing framework for C - Version 2.1-3 00:17:32.760 http://cunit.sourceforge.net/ 00:17:32.760 00:17:32.760 00:17:32.760 Suite: ftl_band_suite 00:17:32.760 Test: test_band_block_offset_from_addr_base ...passed 00:17:32.760 Test: test_band_block_offset_from_addr_offset ...passed 00:17:32.760 Test: test_band_addr_from_block_offset ...passed 00:17:32.760 Test: test_band_set_addr 
...passed 00:17:32.760 Test: test_invalidate_addr ...passed 00:17:32.760 Test: test_next_xfer_addr ...passed 00:17:32.760 00:17:32.760 Run Summary: Type Total Ran Passed Failed Inactive 00:17:32.760 suites 1 1 n/a 0 0 00:17:32.760 tests 6 6 6 0 0 00:17:32.760 asserts 30356 30356 30356 0 n/a 00:17:32.760 00:17:32.760 Elapsed time = 0.180 seconds 00:17:33.019 11:10:51 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:17:33.019 00:17:33.019 00:17:33.019 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.019 http://cunit.sourceforge.net/ 00:17:33.019 00:17:33.019 00:17:33.019 Suite: ftl_bitmap 00:17:33.019 Test: test_ftl_bitmap_create ...passed 00:17:33.019 Test: test_ftl_bitmap_get ...passed 00:17:33.019 Test: test_ftl_bitmap_set ...passed 00:17:33.019 Test: test_ftl_bitmap_clear ...passed 00:17:33.019 Test: test_ftl_bitmap_find_first_set ...passed 00:17:33.019 Test: test_ftl_bitmap_find_first_clear ...passed 00:17:33.019 Test: test_ftl_bitmap_count_set ...passed 00:17:33.019 00:17:33.019 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.019 suites 1 1 n/a 0 0 00:17:33.019 tests 7 7 7 0 0 00:17:33.019 asserts 137 137 137 0 n/a 00:17:33.019 00:17:33.019 Elapsed time = 0.000 seconds 00:17:33.019 [2024-05-15 11:10:51.453861] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:17:33.019 [2024-05-15 11:10:51.454088] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:17:33.019 11:10:51 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:17:33.019 00:17:33.019 00:17:33.019 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.019 http://cunit.sourceforge.net/ 00:17:33.019 00:17:33.019 00:17:33.019 Suite: ftl_io_suite 00:17:33.019 Test: test_completion ...passed 00:17:33.019 Test: test_multiple_ios ...passed 00:17:33.019 00:17:33.019 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.019 suites 1 1 n/a 0 0 00:17:33.019 tests 2 2 2 0 0 00:17:33.019 asserts 47 47 47 0 n/a 00:17:33.019 00:17:33.019 Elapsed time = 0.010 seconds 00:17:33.019 11:10:51 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:17:33.019 00:17:33.019 00:17:33.019 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.019 http://cunit.sourceforge.net/ 00:17:33.019 00:17:33.019 00:17:33.019 Suite: ftl_mngt 00:17:33.019 Test: test_next_step ...passed 00:17:33.019 Test: test_continue_step ...passed 00:17:33.019 Test: test_get_func_and_step_cntx_alloc ...passed 00:17:33.019 Test: test_fail_step ...passed 00:17:33.019 Test: test_mngt_call_and_call_rollback ...passed 00:17:33.020 Test: test_nested_process_failure ...passed 00:17:33.020 00:17:33.020 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.020 suites 1 1 n/a 0 0 00:17:33.020 tests 6 6 6 0 0 00:17:33.020 asserts 176 176 176 0 n/a 00:17:33.020 00:17:33.020 Elapsed time = 0.000 seconds 00:17:33.020 11:10:51 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:17:33.020 00:17:33.020 00:17:33.020 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.020 http://cunit.sourceforge.net/ 00:17:33.020 00:17:33.020 00:17:33.020 Suite: ftl_mempool 00:17:33.020 
Test: test_ftl_mempool_create ...passed 00:17:33.020 Test: test_ftl_mempool_get_put ...passed 00:17:33.020 00:17:33.020 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.020 suites 1 1 n/a 0 0 00:17:33.020 tests 2 2 2 0 0 00:17:33.020 asserts 36 36 36 0 n/a 00:17:33.020 00:17:33.020 Elapsed time = 0.000 seconds 00:17:33.020 11:10:51 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:17:33.020 00:17:33.020 00:17:33.020 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.020 http://cunit.sourceforge.net/ 00:17:33.020 00:17:33.020 00:17:33.020 Suite: ftl_addr64_suite 00:17:33.020 Test: test_addr_cached ...passed 00:17:33.020 00:17:33.020 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.020 suites 1 1 n/a 0 0 00:17:33.020 tests 1 1 1 0 0 00:17:33.020 asserts 1536 1536 1536 0 n/a 00:17:33.020 00:17:33.020 Elapsed time = 0.000 seconds 00:17:33.020 11:10:51 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:17:33.020 00:17:33.020 00:17:33.020 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.020 http://cunit.sourceforge.net/ 00:17:33.020 00:17:33.020 00:17:33.020 Suite: ftl_sb 00:17:33.020 Test: test_sb_crc_v2 ...passed 00:17:33.020 Test: test_sb_crc_v3 ...passed 00:17:33.020 Test: test_sb_v3_md_layout ...passed 00:17:33.020 Test: test_sb_v5_md_layout ...passed 00:17:33.020 00:17:33.020 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.020 suites 1 1 n/a 0 0 00:17:33.020 tests 4 4 4 0 0 00:17:33.020 asserts 148 148 148 0 n/a 00:17:33.020 00:17:33.020 Elapsed time = 0.000 seconds 00:17:33.020 [2024-05-15 11:10:51.572727] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:17:33.020 [2024-05-15 11:10:51.572980] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:17:33.020 [2024-05-15 11:10:51.573013] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:17:33.020 [2024-05-15 11:10:51.573047] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:17:33.020 [2024-05-15 11:10:51.573078] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:17:33.020 [2024-05-15 11:10:51.573154] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:17:33.020 [2024-05-15 11:10:51.573180] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:17:33.020 [2024-05-15 11:10:51.573221] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:17:33.020 [2024-05-15 11:10:51.573279] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:17:33.020 [2024-05-15 11:10:51.573317] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping 
regions found 00:17:33.020 [2024-05-15 11:10:51.573340] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:17:33.020 11:10:51 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:17:33.020 00:17:33.020 00:17:33.020 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.020 http://cunit.sourceforge.net/ 00:17:33.020 00:17:33.020 00:17:33.020 Suite: ftl_layout_upgrade 00:17:33.020 Test: test_l2p_upgrade ...passed 00:17:33.020 00:17:33.020 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.020 suites 1 1 n/a 0 0 00:17:33.020 tests 1 1 1 0 0 00:17:33.020 asserts 140 140 140 0 n/a 00:17:33.020 00:17:33.020 Elapsed time = 0.000 seconds 00:17:33.020 ************************************ 00:17:33.020 END TEST unittest_ftl 00:17:33.020 ************************************ 00:17:33.020 00:17:33.020 real 0m0.429s 00:17:33.020 user 0m0.180s 00:17:33.020 sys 0m0.251s 00:17:33.020 11:10:51 unittest.unittest_ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.020 11:10:51 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:17:33.020 11:10:51 unittest -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:17:33.020 11:10:51 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:33.020 11:10:51 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.020 11:10:51 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:33.020 ************************************ 00:17:33.020 START TEST unittest_accel 00:17:33.020 ************************************ 00:17:33.020 11:10:51 unittest.unittest_accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:17:33.279 00:17:33.279 00:17:33.279 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.279 http://cunit.sourceforge.net/ 00:17:33.279 00:17:33.279 00:17:33.279 Suite: accel_sequence 00:17:33.279 Test: test_sequence_fill_copy ...passed 00:17:33.279 Test: test_sequence_abort ...passed 00:17:33.279 Test: test_sequence_append_error ...passed 00:17:33.279 Test: test_sequence_completion_error ...passed 00:17:33.279 Test: test_sequence_copy_elision ...[2024-05-15 11:10:51.664762] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1901:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f8359e9b7c0 00:17:33.279 [2024-05-15 11:10:51.665056] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1901:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f8359e9b7c0 00:17:33.279 [2024-05-15 11:10:51.665139] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1811:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f8359e9b7c0 00:17:33.279 [2024-05-15 11:10:51.665185] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1811:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f8359e9b7c0 00:17:33.279 passed 00:17:33.279 Test: test_sequence_accel_buffers ...passed 00:17:33.279 Test: test_sequence_memory_domain ...passed 00:17:33.279 Test: test_sequence_module_memory_domain ...[2024-05-15 11:10:51.669570] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1703:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:17:33.279 [2024-05-15 
11:10:51.669698] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1742:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:17:33.279 passed 00:17:33.279 Test: test_sequence_driver ...passed 00:17:33.279 Test: test_sequence_same_iovs ...passed 00:17:33.279 Test: test_sequence_crc32 ...[2024-05-15 11:10:51.672455] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1850:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f83594f57c0 using driver: ut 00:17:33.279 [2024-05-15 11:10:51.672556] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1914:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f83594f57c0 through driver: ut 00:17:33.279 passed 00:17:33.279 Suite: accel 00:17:33.279 Test: test_spdk_accel_task_complete ...passed 00:17:33.279 Test: test_get_task ...passed 00:17:33.279 Test: test_spdk_accel_submit_copy ...passed 00:17:33.280 Test: test_spdk_accel_submit_dualcast ...passed 00:17:33.280 Test: test_spdk_accel_submit_compare ...passed 00:17:33.280 Test: test_spdk_accel_submit_fill ...passed 00:17:33.280 Test: test_spdk_accel_submit_crc32c ...passed 00:17:33.280 Test: test_spdk_accel_submit_crc32cv ...passed 00:17:33.280 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:17:33.280 Test: test_spdk_accel_submit_xor ...passed 00:17:33.280 Test: test_spdk_accel_module_find_by_name ...passed 00:17:33.280 Test: test_spdk_accel_module_register ...[2024-05-15 11:10:51.675850] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:17:33.280 [2024-05-15 11:10:51.675921] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:17:33.280 passed 00:17:33.280 00:17:33.280 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.280 suites 2 2 n/a 0 0 00:17:33.280 tests 23 23 23 0 0 00:17:33.280 asserts 750 750 750 0 n/a 00:17:33.280 00:17:33.280 Elapsed time = 0.020 seconds 00:17:33.280 ************************************ 00:17:33.280 END TEST unittest_accel 00:17:33.280 ************************************ 00:17:33.280 00:17:33.280 real 0m0.050s 00:17:33.280 user 0m0.022s 00:17:33.280 sys 0m0.028s 00:17:33.280 11:10:51 unittest.unittest_accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.280 11:10:51 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:17:33.280 11:10:51 unittest -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:17:33.280 11:10:51 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:33.280 11:10:51 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.280 11:10:51 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:33.280 ************************************ 00:17:33.280 START TEST unittest_ioat 00:17:33.280 ************************************ 00:17:33.280 11:10:51 unittest.unittest_ioat -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:17:33.280 00:17:33.280 00:17:33.280 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.280 http://cunit.sourceforge.net/ 00:17:33.280 00:17:33.280 00:17:33.280 Suite: ioat 00:17:33.280 Test: ioat_state_check ...passed 00:17:33.280 00:17:33.280 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.280 suites 1 1 n/a 0 0 00:17:33.280 tests 1 1 1 0 0 00:17:33.280 asserts 32 32 32 0 n/a 
00:17:33.280 00:17:33.280 Elapsed time = 0.000 seconds 00:17:33.280 ************************************ 00:17:33.280 END TEST unittest_ioat 00:17:33.280 ************************************ 00:17:33.280 00:17:33.280 real 0m0.032s 00:17:33.280 user 0m0.017s 00:17:33.280 sys 0m0.015s 00:17:33.280 11:10:51 unittest.unittest_ioat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.280 11:10:51 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:17:33.280 11:10:51 unittest -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:33.280 11:10:51 unittest -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:17:33.280 11:10:51 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:33.280 11:10:51 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.280 11:10:51 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:33.280 ************************************ 00:17:33.280 START TEST unittest_idxd_user 00:17:33.280 ************************************ 00:17:33.280 11:10:51 unittest.unittest_idxd_user -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:17:33.280 00:17:33.280 00:17:33.280 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.280 http://cunit.sourceforge.net/ 00:17:33.280 00:17:33.280 00:17:33.280 Suite: idxd_user 00:17:33.280 Test: test_idxd_wait_cmd ...passed 00:17:33.280 Test: test_idxd_reset_dev ...passed 00:17:33.280 Test: test_idxd_group_config ...passed 00:17:33.280 Test: test_idxd_wq_config ...passed 00:17:33.280 00:17:33.280 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.280 suites 1 1 n/a 0 0 00:17:33.280 tests 4 4 4 0 0 00:17:33.280 asserts 20 20 20 0 n/a 00:17:33.280 00:17:33.280 Elapsed time = 0.000 seconds 00:17:33.280 [2024-05-15 11:10:51.826137] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:17:33.280 [2024-05-15 11:10:51.826406] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:17:33.280 [2024-05-15 11:10:51.826514] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:17:33.280 [2024-05-15 11:10:51.826552] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:17:33.280 ************************************ 00:17:33.280 END TEST unittest_idxd_user 00:17:33.280 ************************************ 00:17:33.280 00:17:33.280 real 0m0.027s 00:17:33.280 user 0m0.011s 00:17:33.280 sys 0m0.016s 00:17:33.280 11:10:51 unittest.unittest_idxd_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.280 11:10:51 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:17:33.280 11:10:51 unittest -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:17:33.280 11:10:51 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:33.280 11:10:51 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.280 11:10:51 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:33.280 ************************************ 00:17:33.280 START TEST unittest_iscsi 00:17:33.280 ************************************ 00:17:33.280 11:10:51 unittest.unittest_iscsi -- 
common/autotest_common.sh@1121 -- # unittest_iscsi 00:17:33.280 11:10:51 unittest.unittest_iscsi -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:17:33.280 00:17:33.280 00:17:33.280 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.280 http://cunit.sourceforge.net/ 00:17:33.280 00:17:33.280 00:17:33.280 Suite: conn_suite 00:17:33.280 Test: read_task_split_in_order_case ...passed 00:17:33.280 Test: read_task_split_reverse_order_case ...passed 00:17:33.280 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:17:33.280 Test: process_non_read_task_completion_test ...passed 00:17:33.280 Test: free_tasks_on_connection ...passed 00:17:33.280 Test: free_tasks_with_queued_datain ...passed 00:17:33.280 Test: abort_queued_datain_task_test ...passed 00:17:33.280 Test: abort_queued_datain_tasks_test ...passed 00:17:33.280 00:17:33.280 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.280 suites 1 1 n/a 0 0 00:17:33.280 tests 8 8 8 0 0 00:17:33.280 asserts 230 230 230 0 n/a 00:17:33.280 00:17:33.280 Elapsed time = 0.000 seconds 00:17:33.280 11:10:51 unittest.unittest_iscsi -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:17:33.540 00:17:33.540 00:17:33.540 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.540 http://cunit.sourceforge.net/ 00:17:33.540 00:17:33.540 00:17:33.540 Suite: iscsi_suite 00:17:33.540 Test: param_negotiation_test ...passed 00:17:33.540 Test: list_negotiation_test ...passed 00:17:33.540 Test: parse_valid_test ...passed 00:17:33.540 Test: parse_invalid_test ...passed 00:17:33.540 00:17:33.540 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.540 suites 1 1 n/a 0 0 00:17:33.540 tests 4 4 4 0 0 00:17:33.540 asserts 161 161 161 0 n/a 00:17:33.540 00:17:33.540 Elapsed time = 0.010 seconds 00:17:33.540 [2024-05-15 11:10:51.929077] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:17:33.540 [2024-05-15 11:10:51.929284] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:17:33.540 [2024-05-15 11:10:51.929330] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:17:33.540 [2024-05-15 11:10:51.929402] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:17:33.540 [2024-05-15 11:10:51.929534] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:17:33.540 [2024-05-15 11:10:51.929625] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:17:33.540 [2024-05-15 11:10:51.929710] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:17:33.540 11:10:51 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:17:33.540 00:17:33.540 00:17:33.540 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.540 http://cunit.sourceforge.net/ 00:17:33.540 00:17:33.540 00:17:33.540 Suite: iscsi_target_node_suite 00:17:33.540 Test: add_lun_test_cases ...[2024-05-15 11:10:51.954522] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:17:33.540 [2024-05-15 11:10:51.954849] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID 
(-2) is negative 00:17:33.540 passed 00:17:33.540 Test: allow_any_allowed ...passed 00:17:33.540 Test: allow_ipv6_allowed ...passed 00:17:33.540 Test: allow_ipv6_denied ...passed 00:17:33.540 Test: allow_ipv6_invalid ...passed 00:17:33.540 Test: allow_ipv4_allowed ...passed 00:17:33.540 Test: allow_ipv4_denied ...passed 00:17:33.540 Test: allow_ipv4_invalid ...passed 00:17:33.540 Test: node_access_allowed ...passed 00:17:33.540 Test: node_access_denied_by_empty_netmask ...passed 00:17:33.540 Test: node_access_multi_initiator_groups_cases ...[2024-05-15 11:10:51.954949] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:17:33.540 [2024-05-15 11:10:51.954992] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:17:33.540 [2024-05-15 11:10:51.955015] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:17:33.540 passed 00:17:33.540 Test: allow_iscsi_name_multi_maps_case ...passed 00:17:33.540 Test: chap_param_test_cases ...passed 00:17:33.540 00:17:33.540 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.540 suites 1 1 n/a 0 0 00:17:33.540 tests 13 13 13 0 0 00:17:33.540 asserts 50 50 50 0 n/a 00:17:33.540 00:17:33.540 Elapsed time = 0.000 seconds 00:17:33.540 [2024-05-15 11:10:51.955312] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:17:33.540 [2024-05-15 11:10:51.955345] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:17:33.540 [2024-05-15 11:10:51.955402] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:17:33.540 [2024-05-15 11:10:51.955429] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:17:33.540 [2024-05-15 11:10:51.955462] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:17:33.540 11:10:51 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:17:33.540 00:17:33.540 00:17:33.540 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.540 http://cunit.sourceforge.net/ 00:17:33.540 00:17:33.540 00:17:33.540 Suite: iscsi_suite 00:17:33.540 Test: op_login_check_target_test ...passed 00:17:33.540 Test: op_login_session_normal_test ...[2024-05-15 11:10:51.979591] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:17:33.540 [2024-05-15 11:10:51.979872] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:17:33.540 [2024-05-15 11:10:51.979910] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:17:33.540 [2024-05-15 11:10:51.979945] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:17:33.540 [2024-05-15 11:10:51.979995] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:17:33.540 [2024-05-15 11:10:51.980085] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:17:33.540 [2024-05-15 11:10:51.980205] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:17:33.540 [2024-05-15 11:10:51.980289] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:17:33.540 passed 00:17:33.540 Test: maxburstlength_test ...[2024-05-15 11:10:51.980640] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:17:33.540 [2024-05-15 11:10:51.980742] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4554:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:17:33.540 passed 00:17:33.540 Test: underflow_for_read_transfer_test ...passed 00:17:33.540 Test: underflow_for_zero_read_transfer_test ...passed 00:17:33.540 Test: underflow_for_request_sense_test ...passed 00:17:33.540 Test: underflow_for_check_condition_test ...passed 00:17:33.540 Test: add_transfer_task_test ...passed 00:17:33.540 Test: get_transfer_task_test ...passed 00:17:33.540 Test: del_transfer_task_test ...passed 00:17:33.540 Test: clear_all_transfer_tasks_test ...passed 00:17:33.540 Test: build_iovs_test ...passed 00:17:33.540 Test: build_iovs_with_md_test ...passed 00:17:33.540 Test: pdu_hdr_op_login_test ...[2024-05-15 11:10:51.981784] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:17:33.540 [2024-05-15 11:10:51.981946] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:17:33.540 [2024-05-15 11:10:51.982084] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:17:33.540 passed 00:17:33.540 Test: pdu_hdr_op_text_test ...[2024-05-15 11:10:51.982180] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2246:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:17:33.540 [2024-05-15 11:10:51.982296] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2278:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:17:33.540 [2024-05-15 11:10:51.982367] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2291:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:17:33.540 passed 00:17:33.540 Test: pdu_hdr_op_logout_test ...passed 00:17:33.540 Test: pdu_hdr_op_scsi_test ...[2024-05-15 11:10:51.982457] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2521:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:17:33.540 [2024-05-15 11:10:51.982629] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:17:33.540 [2024-05-15 11:10:51.982679] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:17:33.540 [2024-05-15 11:10:51.982750] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3370:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:17:33.540 [2024-05-15 11:10:51.982869] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3403:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:17:33.540 [2024-05-15 11:10:51.982950] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3410:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:17:33.540 [2024-05-15 11:10:51.983064] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3434:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:17:33.540 passed 00:17:33.540 Test: pdu_hdr_op_task_mgmt_test ...[2024-05-15 11:10:51.983169] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3611:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:17:33.540 [2024-05-15 11:10:51.983246] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3700:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:17:33.541 passed 00:17:33.541 Test: pdu_hdr_op_nopout_test ...passed 00:17:33.541 Test: pdu_hdr_op_data_test ...[2024-05-15 11:10:51.983417] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3719:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:17:33.541 [2024-05-15 11:10:51.983517] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:17:33.541 [2024-05-15 11:10:51.983566] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:17:33.541 [2024-05-15 11:10:51.983609] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3749:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:17:33.541 [2024-05-15 11:10:51.983681] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4192:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:17:33.541 [2024-05-15 11:10:51.983779] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4209:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:17:33.541 [2024-05-15 11:10:51.983890] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:17:33.541 [2024-05-15 11:10:51.983987] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:17:33.541 [2024-05-15 11:10:51.984037] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4228:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:17:33.541 [2024-05-15 11:10:51.984105] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4239:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:17:33.541 [2024-05-15 11:10:51.984159] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4249:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:17:33.541 passed 00:17:33.541 Test: empty_text_with_cbit_test ...passed 00:17:33.541 Test: pdu_payload_read_test ...[2024-05-15 11:10:51.985539] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4637:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:17:33.541 passed 00:17:33.541 Test: data_out_pdu_sequence_test ...passed 00:17:33.541 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:17:33.541 00:17:33.541 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.541 suites 1 1 n/a 0 0 00:17:33.541 tests 24 24 24 0 0 00:17:33.541 asserts 150253 150253 150253 0 n/a 00:17:33.541 00:17:33.541 Elapsed time = 0.020 seconds 00:17:33.541 11:10:51 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:17:33.541 00:17:33.541 00:17:33.541 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.541 http://cunit.sourceforge.net/ 00:17:33.541 00:17:33.541 00:17:33.541 Suite: init_grp_suite 00:17:33.541 Test: create_initiator_group_success_case ...passed 00:17:33.541 Test: find_initiator_group_success_case ...passed 00:17:33.541 Test: register_initiator_group_twice_case ...passed 00:17:33.541 Test: add_initiator_name_success_case ...passed 00:17:33.541 Test: add_initiator_name_fail_case ...passed 00:17:33.541 Test: delete_all_initiator_names_success_case ...passed 00:17:33.541 Test: add_netmask_success_case ...passed 00:17:33.541 Test: add_netmask_fail_case ...passed 00:17:33.541 Test: delete_all_netmasks_success_case ...passed 00:17:33.541 Test: initiator_name_overwrite_all_to_any_case ...passed 00:17:33.541 Test: netmask_overwrite_all_to_any_case ...passed 00:17:33.541 Test: add_delete_initiator_names_case ...passed 00:17:33.541 Test: add_duplicated_initiator_names_case ...passed 00:17:33.541 Test: delete_nonexisting_initiator_names_case ...passed 00:17:33.541 Test: add_delete_netmasks_case ...passed 00:17:33.541 Test: add_duplicated_netmasks_case ...passed 00:17:33.541 Test: delete_nonexisting_netmasks_case ...passed 00:17:33.541 00:17:33.541 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.541 suites 1 1 n/a 0 0 00:17:33.541 tests 17 17 17 0 0 00:17:33.541 asserts 108 108 108 0 n/a 00:17:33.541 00:17:33.541 Elapsed time = 0.000 seconds 00:17:33.541 [2024-05-15 11:10:52.017107] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:17:33.541 [2024-05-15 11:10:52.017382] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:17:33.541 11:10:52 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:17:33.541 00:17:33.541 00:17:33.541 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.541 http://cunit.sourceforge.net/ 00:17:33.541 00:17:33.541 00:17:33.541 Suite: portal_grp_suite 00:17:33.541 Test: portal_create_ipv4_normal_case ...passed 00:17:33.541 Test: portal_create_ipv6_normal_case ...passed 00:17:33.541 Test: portal_create_ipv4_wildcard_case ...passed 00:17:33.541 Test: portal_create_ipv6_wildcard_case ...passed 00:17:33.541 Test: portal_create_twice_case ...passed 00:17:33.541 Test: portal_grp_register_unregister_case ...passed 00:17:33.541 Test: portal_grp_register_twice_case ...passed 00:17:33.541 Test: portal_grp_add_delete_case ...[2024-05-15 11:10:52.037055] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:17:33.541 passed 00:17:33.541 Test: portal_grp_add_delete_twice_case ...passed 
00:17:33.541 00:17:33.541 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.541 suites 1 1 n/a 0 0 00:17:33.541 tests 9 9 9 0 0 00:17:33.541 asserts 44 44 44 0 n/a 00:17:33.541 00:17:33.541 Elapsed time = 0.000 seconds 00:17:33.541 ************************************ 00:17:33.541 END TEST unittest_iscsi 00:17:33.541 ************************************ 00:17:33.541 00:17:33.541 real 0m0.169s 00:17:33.541 user 0m0.089s 00:17:33.541 sys 0m0.082s 00:17:33.541 11:10:52 unittest.unittest_iscsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.541 11:10:52 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:17:33.541 11:10:52 unittest -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:17:33.541 11:10:52 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:33.541 11:10:52 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.541 11:10:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:33.541 ************************************ 00:17:33.541 START TEST unittest_json 00:17:33.541 ************************************ 00:17:33.541 11:10:52 unittest.unittest_json -- common/autotest_common.sh@1121 -- # unittest_json 00:17:33.541 11:10:52 unittest.unittest_json -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:17:33.541 00:17:33.541 00:17:33.541 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.541 http://cunit.sourceforge.net/ 00:17:33.541 00:17:33.541 00:17:33.541 Suite: json 00:17:33.541 Test: test_parse_literal ...passed 00:17:33.541 Test: test_parse_string_simple ...passed 00:17:33.541 Test: test_parse_string_control_chars ...passed 00:17:33.541 Test: test_parse_string_utf8 ...passed 00:17:33.541 Test: test_parse_string_escapes_twochar ...passed 00:17:33.541 Test: test_parse_string_escapes_unicode ...passed 00:17:33.541 Test: test_parse_number ...passed 00:17:33.541 Test: test_parse_array ...passed 00:17:33.541 Test: test_parse_object ...passed 00:17:33.541 Test: test_parse_nesting ...passed 00:17:33.541 Test: test_parse_comment ...passed 00:17:33.541 00:17:33.541 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.541 suites 1 1 n/a 0 0 00:17:33.541 tests 11 11 11 0 0 00:17:33.541 asserts 1516 1516 1516 0 n/a 00:17:33.541 00:17:33.541 Elapsed time = 0.000 seconds 00:17:33.541 11:10:52 unittest.unittest_json -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:17:33.541 00:17:33.541 00:17:33.541 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.541 http://cunit.sourceforge.net/ 00:17:33.541 00:17:33.541 00:17:33.541 Suite: json 00:17:33.541 Test: test_strequal ...passed 00:17:33.541 Test: test_num_to_uint16 ...passed 00:17:33.541 Test: test_num_to_int32 ...passed 00:17:33.541 Test: test_num_to_uint64 ...passed 00:17:33.541 Test: test_decode_object ...passed 00:17:33.541 Test: test_decode_array ...passed 00:17:33.541 Test: test_decode_bool ...passed 00:17:33.541 Test: test_decode_uint16 ...passed 00:17:33.541 Test: test_decode_int32 ...passed 00:17:33.541 Test: test_decode_uint32 ...passed 00:17:33.541 Test: test_decode_uint64 ...passed 00:17:33.541 Test: test_decode_string ...passed 00:17:33.541 Test: test_decode_uuid ...passed 00:17:33.541 Test: test_find ...passed 00:17:33.541 Test: test_find_array ...passed 00:17:33.541 Test: test_iterating ...passed 00:17:33.541 Test: test_free_object ...passed 00:17:33.541 00:17:33.541 Run Summary: Type Total Ran 
Passed Failed Inactive 00:17:33.541 suites 1 1 n/a 0 0 00:17:33.541 tests 17 17 17 0 0 00:17:33.541 asserts 236 236 236 0 n/a 00:17:33.541 00:17:33.541 Elapsed time = 0.000 seconds 00:17:33.541 11:10:52 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:17:33.541 00:17:33.541 00:17:33.541 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.541 http://cunit.sourceforge.net/ 00:17:33.541 00:17:33.541 00:17:33.541 Suite: json 00:17:33.541 Test: test_write_literal ...passed 00:17:33.541 Test: test_write_string_simple ...passed 00:17:33.541 Test: test_write_string_escapes ...passed 00:17:33.541 Test: test_write_string_utf16le ...passed 00:17:33.541 Test: test_write_number_int32 ...passed 00:17:33.541 Test: test_write_number_uint32 ...passed 00:17:33.541 Test: test_write_number_uint128 ...passed 00:17:33.541 Test: test_write_string_number_uint128 ...passed 00:17:33.541 Test: test_write_number_int64 ...passed 00:17:33.541 Test: test_write_number_uint64 ...passed 00:17:33.541 Test: test_write_number_double ...passed 00:17:33.541 Test: test_write_uuid ...passed 00:17:33.541 Test: test_write_array ...passed 00:17:33.541 Test: test_write_object ...passed 00:17:33.541 Test: test_write_nesting ...passed 00:17:33.541 Test: test_write_val ...passed 00:17:33.541 00:17:33.541 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.541 suites 1 1 n/a 0 0 00:17:33.541 tests 16 16 16 0 0 00:17:33.541 asserts 918 918 918 0 n/a 00:17:33.541 00:17:33.541 Elapsed time = 0.000 seconds 00:17:33.542 11:10:52 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:17:33.801 00:17:33.801 00:17:33.801 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.801 http://cunit.sourceforge.net/ 00:17:33.801 00:17:33.801 00:17:33.801 Suite: jsonrpc 00:17:33.801 Test: test_parse_request ...passed 00:17:33.801 Test: test_parse_request_streaming ...passed 00:17:33.801 00:17:33.801 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.801 suites 1 1 n/a 0 0 00:17:33.801 tests 2 2 2 0 0 00:17:33.801 asserts 289 289 289 0 n/a 00:17:33.801 00:17:33.801 Elapsed time = 0.000 seconds 00:17:33.801 ************************************ 00:17:33.801 END TEST unittest_json 00:17:33.801 ************************************ 00:17:33.801 00:17:33.801 real 0m0.098s 00:17:33.801 user 0m0.050s 00:17:33.801 sys 0m0.049s 00:17:33.801 11:10:52 unittest.unittest_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.801 11:10:52 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:17:33.801 11:10:52 unittest -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:17:33.801 11:10:52 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:33.801 11:10:52 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.801 11:10:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:33.801 ************************************ 00:17:33.801 START TEST unittest_rpc 00:17:33.801 ************************************ 00:17:33.801 11:10:52 unittest.unittest_rpc -- common/autotest_common.sh@1121 -- # unittest_rpc 00:17:33.801 11:10:52 unittest.unittest_rpc -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:17:33.801 00:17:33.801 00:17:33.801 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.801 http://cunit.sourceforge.net/ 00:17:33.801 00:17:33.801 
00:17:33.801 Suite: rpc 00:17:33.801 Test: test_jsonrpc_handler ...passed 00:17:33.801 Test: test_spdk_rpc_is_method_allowed ...passed 00:17:33.801 Test: test_rpc_get_methods ...passed 00:17:33.801 Test: test_rpc_spdk_get_version ...passed 00:17:33.801 Test: test_spdk_rpc_listen_close ...passed 00:17:33.801 Test: test_rpc_run_multiple_servers ...passed 00:17:33.801 00:17:33.801 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.801 suites 1 1 n/a 0 0 00:17:33.801 tests 6 6 6 0 0 00:17:33.801 asserts 23 23 23 0 n/a 00:17:33.801 00:17:33.801 Elapsed time = 0.000 seconds 00:17:33.801 [2024-05-15 11:10:52.253425] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:17:33.801 ************************************ 00:17:33.801 END TEST unittest_rpc 00:17:33.801 ************************************ 00:17:33.801 00:17:33.801 real 0m0.026s 00:17:33.801 user 0m0.010s 00:17:33.801 sys 0m0.016s 00:17:33.801 11:10:52 unittest.unittest_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.801 11:10:52 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.801 11:10:52 unittest -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:17:33.801 11:10:52 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:33.801 11:10:52 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.801 11:10:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:33.801 ************************************ 00:17:33.801 START TEST unittest_notify 00:17:33.801 ************************************ 00:17:33.801 11:10:52 unittest.unittest_notify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:17:33.801 00:17:33.801 00:17:33.801 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.801 http://cunit.sourceforge.net/ 00:17:33.801 00:17:33.801 00:17:33.801 Suite: app_suite 00:17:33.801 Test: notify ...passed 00:17:33.801 00:17:33.801 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.801 suites 1 1 n/a 0 0 00:17:33.801 tests 1 1 1 0 0 00:17:33.801 asserts 13 13 13 0 n/a 00:17:33.801 00:17:33.801 Elapsed time = 0.000 seconds 00:17:33.801 ************************************ 00:17:33.801 END TEST unittest_notify 00:17:33.801 ************************************ 00:17:33.801 00:17:33.801 real 0m0.026s 00:17:33.801 user 0m0.013s 00:17:33.801 sys 0m0.013s 00:17:33.801 11:10:52 unittest.unittest_notify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.801 11:10:52 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:17:33.801 11:10:52 unittest -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:17:33.801 11:10:52 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:33.801 11:10:52 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.801 11:10:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:33.801 ************************************ 00:17:33.801 START TEST unittest_nvme 00:17:33.801 ************************************ 00:17:33.801 11:10:52 unittest.unittest_nvme -- common/autotest_common.sh@1121 -- # unittest_nvme 00:17:33.801 11:10:52 unittest.unittest_nvme -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:17:33.801 00:17:33.801 00:17:33.801 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.801 
http://cunit.sourceforge.net/ 00:17:33.801 00:17:33.801 00:17:33.801 Suite: nvme 00:17:33.801 Test: test_opc_data_transfer ...passed 00:17:33.801 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:17:33.801 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:17:33.801 Test: test_trid_parse_and_compare ...passed 00:17:33.801 Test: test_trid_trtype_str ...passed 00:17:33.801 Test: test_trid_adrfam_str ...passed 00:17:33.801 Test: test_nvme_ctrlr_probe ...passed 00:17:33.801 Test: test_spdk_nvme_probe ...[2024-05-15 11:10:52.395954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1176:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:17:33.801 [2024-05-15 11:10:52.396215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:17:33.801 [2024-05-15 11:10:52.396309] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1188:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:17:33.801 [2024-05-15 11:10:52.396348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:17:33.801 [2024-05-15 11:10:52.396378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without value 00:17:33.801 [2024-05-15 11:10:52.396450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:17:33.801 [2024-05-15 11:10:52.396741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:17:33.801 [2024-05-15 11:10:52.396856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:17:33.801 passed 00:17:33.801 Test: test_spdk_nvme_connect ...[2024-05-15 11:10:52.396892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:17:33.801 [2024-05-15 11:10:52.396925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:17:33.801 [2024-05-15 11:10:52.396950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:17:33.801 [2024-05-15 11:10:52.397023] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: *ERROR*: No transport ID specified 00:17:33.801 passed 00:17:33.801 Test: test_nvme_ctrlr_probe_internal ...passed 00:17:33.801 Test: test_nvme_init_controllers ...[2024-05-15 11:10:52.397196] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:17:33.801 [2024-05-15 11:10:52.397248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:17:33.801 [2024-05-15 11:10:52.397368] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:17:33.801 [2024-05-15 11:10:52.397404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:33.801 passed 00:17:33.801 Test: test_nvme_driver_init ...[2024-05-15 11:10:52.397460] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:17:33.801 [2024-05-15 11:10:52.397522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:17:33.801 [2024-05-15 11:10:52.397547] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:17:34.060 passed 00:17:34.060 Test: test_spdk_nvme_detach ...passed 00:17:34.060 Test: test_nvme_completion_poll_cb ...passed 00:17:34.060 Test: test_nvme_user_copy_cmd_complete ...passed 00:17:34.060 Test: test_nvme_allocate_request_null ...passed 00:17:34.060 Test: test_nvme_allocate_request ...passed 00:17:34.060 Test: test_nvme_free_request ...passed 00:17:34.060 Test: test_nvme_allocate_request_user_copy ...passed 00:17:34.060 Test: test_nvme_robust_mutex_init_shared ...passed 00:17:34.060 Test: test_nvme_request_check_timeout ...passed 00:17:34.060 Test: test_nvme_wait_for_completion ...passed 00:17:34.060 Test: test_spdk_nvme_parse_func ...passed 00:17:34.060 Test: test_spdk_nvme_detach_async ...[2024-05-15 11:10:52.510103] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:17:34.060 [2024-05-15 11:10:52.510294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:17:34.060 passed 00:17:34.060 Test: test_nvme_parse_addr ...passed 00:17:34.060 00:17:34.060 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.060 suites 1 1 n/a 0 0 00:17:34.060 tests 25 25 25 0 0 00:17:34.060 asserts 326 326 326 0 n/a 00:17:34.060 00:17:34.060 Elapsed time = 0.020 seconds 00:17:34.060 [2024-05-15 11:10:52.511038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1586:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:17:34.060 11:10:52 unittest.unittest_nvme -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:17:34.060 00:17:34.060 00:17:34.060 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.060 http://cunit.sourceforge.net/ 00:17:34.060 00:17:34.060 00:17:34.060 Suite: nvme_ctrlr 00:17:34.060 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-05-15 11:10:52.537313] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 passed 00:17:34.060 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-05-15 11:10:52.539004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 passed 00:17:34.060 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-05-15 11:10:52.540250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 passed 00:17:34.060 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-05-15 11:10:52.541531] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 passed 00:17:34.060 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-05-15 11:10:52.542802] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 [2024-05-15 11:10:52.543984] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 11:10:52.545185] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] 
Ctrlr enable failed with error: -22[2024-05-15 11:10:52.546348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:17:34.060 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-05-15 11:10:52.548741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 [2024-05-15 11:10:52.550971] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 11:10:52.552132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:17:34.060 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-05-15 11:10:52.554521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 [2024-05-15 11:10:52.555735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 11:10:52.558014] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:17:34.060 Test: test_nvme_ctrlr_init_delay ...[2024-05-15 11:10:52.560424] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 passed 00:17:34.060 Test: test_alloc_io_qpair_rr_1 ...[2024-05-15 11:10:52.561704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 [2024-05-15 11:10:52.561913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:17:34.060 [2024-05-15 11:10:52.562062] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:17:34.060 [2024-05-15 11:10:52.562134] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:17:34.060 [2024-05-15 11:10:52.562187] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:17:34.060 passed 00:17:34.060 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:17:34.060 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:17:34.060 Test: test_alloc_io_qpair_wrr_1 ...passed 00:17:34.060 Test: test_alloc_io_qpair_wrr_2 ...[2024-05-15 11:10:52.562354] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 [2024-05-15 11:10:52.562447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.060 [2024-05-15 11:10:52.562538] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:17:34.060 passed 00:17:34.060 Test: test_spdk_nvme_ctrlr_update_firmware 
...[2024-05-15 11:10:52.562660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4858:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:17:34.060 passed 00:17:34.060 Test: test_nvme_ctrlr_fail ...passed 00:17:34.060 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:17:34.060 Test: test_nvme_ctrlr_set_supported_features ...passed 00:17:34.060 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...[2024-05-15 11:10:52.562790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:17:34.060 [2024-05-15 11:10:52.562874] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4935:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:17:34.060 [2024-05-15 11:10:52.562917] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:17:34.060 [2024-05-15 11:10:52.562963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:17:34.060 passed 00:17:34.060 Test: test_nvme_ctrlr_test_active_ns ...[2024-05-15 11:10:52.563192] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:17:34.319 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:17:34.319 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:17:34.319 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-05-15 11:10:52.750490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-05-15 11:10:52.757574] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-05-15 11:10:52.758852] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 [2024-05-15 11:10:52.758990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2883:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:17:34.319 passed 00:17:34.319 Test: test_alloc_io_qpair_fail ...passed 00:17:34.319 Test: test_nvme_ctrlr_add_remove_process ...passed 00:17:34.319 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:17:34.319 Test: test_nvme_ctrlr_set_state ...passed 00:17:34.319 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-05-15 11:10:52.760173] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 [2024-05-15 11:10:52.760303] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:17:34.319 [2024-05-15 11:10:52.760433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
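The suite headers, per-test "passed" lines, and "Run Summary" tables that recur throughout this output are the standard report format of the CUnit framework named in each header (http://cunit.sourceforge.net/). For reference only, a minimal standalone harness that produces a report of the same shape could look like the sketch below; it is illustrative, it is not SPDK's actual unit-test code, and the suite and test names are made up.

    /* Illustrative CUnit harness -- not SPDK source. One suite, two tests,
     * run in verbose basic mode so it prints per-test lines and the same
     * kind of "Run Summary" table seen in this log. */
    #include <CUnit/Basic.h>

    static void test_example_arithmetic(void)
    {
            CU_ASSERT_EQUAL(2 + 2, 4);   /* each CU_ASSERT* adds to the "asserts" column */
    }

    static void test_example_strings(void)
    {
            CU_ASSERT_STRING_EQUAL("nvme", "nvme");
    }

    int main(void)
    {
            if (CU_initialize_registry() != CUE_SUCCESS) {
                    return CU_get_error();
            }

            CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
            if (suite == NULL ||
                CU_add_test(suite, "test_example_arithmetic", test_example_arithmetic) == NULL ||
                CU_add_test(suite, "test_example_strings", test_example_strings) == NULL) {
                    CU_cleanup_registry();
                    return CU_get_error();
            }

            CU_basic_set_mode(CU_BRM_VERBOSE);
            CU_basic_run_tests();
            unsigned int failures = CU_get_number_of_failures();
            CU_cleanup_registry();
            return failures == 0 ? 0 : 1;
    }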
00:17:34.319 [2024-05-15 11:10:52.760506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-05-15 11:10:52.784130] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_ns_mgmt ...[2024-05-15 11:10:52.816678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_reset ...passed 00:17:34.319 Test: test_nvme_ctrlr_aer_callback ...[2024-05-15 11:10:52.818078] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 [2024-05-15 11:10:52.818431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-05-15 11:10:52.819835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:17:34.319 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:17:34.319 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-05-15 11:10:52.821336] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:17:34.319 Test: test_nvme_ctrlr_ana_resize ...[2024-05-15 11:10:52.822617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:17:34.319 Test: test_nvme_transport_ctrlr_ready ...passed 00:17:34.319 Test: test_nvme_ctrlr_disable ...[2024-05-15 11:10:52.823997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4029:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:17:34.319 [2024-05-15 11:10:52.824054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4080:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:17:34.319 [2024-05-15 11:10:52.824091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:17:34.319 passed 00:17:34.319 00:17:34.319 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.319 suites 1 1 n/a 0 0 00:17:34.319 tests 43 43 43 0 0 00:17:34.319 asserts 10418 10418 10418 0 n/a 00:17:34.319 00:17:34.319 Elapsed time = 0.240 seconds 00:17:34.319 11:10:52 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:17:34.319 00:17:34.319 00:17:34.319 CUnit - A unit testing framework 
for C - Version 2.1-3 00:17:34.319 http://cunit.sourceforge.net/ 00:17:34.319 00:17:34.319 00:17:34.319 Suite: nvme_ctrlr_cmd 00:17:34.319 Test: test_get_log_pages ...passed 00:17:34.319 Test: test_set_feature_cmd ...passed 00:17:34.319 Test: test_set_feature_ns_cmd ...passed 00:17:34.319 Test: test_get_feature_cmd ...passed 00:17:34.319 Test: test_get_feature_ns_cmd ...passed 00:17:34.319 Test: test_abort_cmd ...passed 00:17:34.319 Test: test_set_host_id_cmds ...passed 00:17:34.319 Test: test_io_cmd_raw_no_payload_build ...passed 00:17:34.319 Test: test_io_raw_cmd ...passed 00:17:34.319 Test: test_io_raw_cmd_with_md ...passed 00:17:34.319 Test: test_namespace_attach ...passed 00:17:34.319 Test: test_namespace_detach ...passed 00:17:34.319 Test: test_namespace_create ...passed 00:17:34.319 Test: test_namespace_delete ...passed 00:17:34.319 Test: test_doorbell_buffer_config ...passed 00:17:34.320 Test: test_format_nvme ...passed 00:17:34.320 Test: test_fw_commit ...passed 00:17:34.320 Test: test_fw_image_download ...passed 00:17:34.320 Test: test_sanitize ...passed 00:17:34.320 Test: test_directive ...passed 00:17:34.320 Test: test_nvme_request_add_abort ...passed 00:17:34.320 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:17:34.320 Test: test_nvme_ctrlr_cmd_identify ...passed 00:17:34.320 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:17:34.320 00:17:34.320 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.320 suites 1 1 n/a 0 0 00:17:34.320 tests 24 24 24 0 0 00:17:34.320 asserts 198 198 198 0 n/a 00:17:34.320 00:17:34.320 Elapsed time = 0.000 seconds 00:17:34.320 [2024-05-15 11:10:52.870521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:17:34.320 11:10:52 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:17:34.320 00:17:34.320 00:17:34.320 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.320 http://cunit.sourceforge.net/ 00:17:34.320 00:17:34.320 00:17:34.320 Suite: nvme_ctrlr_cmd 00:17:34.320 Test: test_geometry_cmd ...passed 00:17:34.320 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:17:34.320 00:17:34.320 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.320 suites 1 1 n/a 0 0 00:17:34.320 tests 2 2 2 0 0 00:17:34.320 asserts 7 7 7 0 n/a 00:17:34.320 00:17:34.320 Elapsed time = 0.000 seconds 00:17:34.320 11:10:52 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:17:34.320 00:17:34.320 00:17:34.320 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.320 http://cunit.sourceforge.net/ 00:17:34.320 00:17:34.320 00:17:34.320 Suite: nvme 00:17:34.320 Test: test_nvme_ns_construct ...passed 00:17:34.320 Test: test_nvme_ns_uuid ...passed 00:17:34.320 Test: test_nvme_ns_csi ...passed 00:17:34.320 Test: test_nvme_ns_data ...passed 00:17:34.320 Test: test_nvme_ns_set_identify_data ...passed 00:17:34.320 Test: test_spdk_nvme_ns_get_values ...passed 00:17:34.320 Test: test_spdk_nvme_ns_is_active ...passed 00:17:34.320 Test: spdk_nvme_ns_supports ...passed 00:17:34.320 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:17:34.320 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:17:34.320 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:17:34.320 Test: test_nvme_ns_find_id_desc ...passed 00:17:34.320 00:17:34.320 Run Summary: Type Total Ran 
Passed Failed Inactive 00:17:34.320 suites 1 1 n/a 0 0 00:17:34.320 tests 12 12 12 0 0 00:17:34.320 asserts 83 83 83 0 n/a 00:17:34.320 00:17:34.320 Elapsed time = 0.000 seconds 00:17:34.320 11:10:52 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:17:34.320 00:17:34.320 00:17:34.320 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.320 http://cunit.sourceforge.net/ 00:17:34.320 00:17:34.320 00:17:34.320 Suite: nvme_ns_cmd 00:17:34.320 Test: split_test ...passed 00:17:34.320 Test: split_test2 ...passed 00:17:34.320 Test: split_test3 ...passed 00:17:34.320 Test: split_test4 ...passed 00:17:34.320 Test: test_nvme_ns_cmd_flush ...passed 00:17:34.320 Test: test_nvme_ns_cmd_dataset_management ...passed 00:17:34.320 Test: test_nvme_ns_cmd_copy ...passed 00:17:34.320 Test: test_io_flags ...passed 00:17:34.320 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:17:34.320 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:17:34.320 Test: test_nvme_ns_cmd_reservation_register ...passed 00:17:34.320 Test: test_nvme_ns_cmd_reservation_release ...passed 00:17:34.320 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:17:34.320 Test: test_nvme_ns_cmd_reservation_report ...passed 00:17:34.320 Test: test_cmd_child_request ...passed 00:17:34.320 Test: test_nvme_ns_cmd_readv ...passed 00:17:34.320 Test: test_nvme_ns_cmd_read_with_md ...passed 00:17:34.320 Test: test_nvme_ns_cmd_writev ...passed 00:17:34.320 Test: test_nvme_ns_cmd_write_with_md ...passed 00:17:34.320 Test: test_nvme_ns_cmd_zone_append_with_md ...[2024-05-15 11:10:52.936200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:17:34.320 [2024-05-15 11:10:52.936942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:17:34.320 passed 00:17:34.320 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:17:34.320 Test: test_nvme_ns_cmd_comparev ...passed 00:17:34.320 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:17:34.320 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:17:34.320 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:17:34.320 Test: test_nvme_ns_cmd_setup_request ...passed 00:17:34.320 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:17:34.320 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:17:34.320 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:17:34.320 Test: test_nvme_ns_cmd_verify ...passed 00:17:34.320 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:17:34.320 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:17:34.320 00:17:34.320 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.320 suites 1 1 n/a 0 0 00:17:34.320 tests 32 32 32 0 0 00:17:34.320 asserts 550 550 550 0 n/a 00:17:34.320 00:17:34.320 Elapsed time = 0.000 seconds 00:17:34.320 [2024-05-15 11:10:52.938194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:17:34.320 [2024-05-15 11:10:52.938281] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:17:34.320 11:10:52 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:17:34.579 00:17:34.579 00:17:34.579 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.579 http://cunit.sourceforge.net/ 
00:17:34.579 00:17:34.579 00:17:34.579 Suite: nvme_ns_cmd 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:17:34.579 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:17:34.579 00:17:34.579 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.579 suites 1 1 n/a 0 0 00:17:34.579 tests 12 12 12 0 0 00:17:34.579 asserts 123 123 123 0 n/a 00:17:34.579 00:17:34.579 Elapsed time = 0.000 seconds 00:17:34.579 11:10:52 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:17:34.579 00:17:34.579 00:17:34.579 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.579 http://cunit.sourceforge.net/ 00:17:34.579 00:17:34.579 00:17:34.579 Suite: nvme_qpair 00:17:34.579 Test: test3 ...passed 00:17:34.579 Test: test_ctrlr_failed ...passed 00:17:34.579 Test: struct_packing ...passed 00:17:34.579 Test: test_nvme_qpair_process_completions ...passed 00:17:34.579 Test: test_nvme_completion_is_retry ...passed 00:17:34.579 Test: test_get_status_string ...passed 00:17:34.579 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:17:34.580 Test: test_nvme_qpair_submit_request ...passed 00:17:34.580 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:17:34.580 Test: test_nvme_qpair_manual_complete_request ...passed 00:17:34.580 Test: test_nvme_qpair_init_deinit ...passed 00:17:34.580 Test: test_nvme_get_sgl_print_info ...passed 00:17:34.580 00:17:34.580 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.580 suites 1 1 n/a 0 0 00:17:34.580 tests 12 12 12 0 0 00:17:34.580 asserts 154 154 154 0 n/a 00:17:34.580 00:17:34.580 Elapsed time = 0.000 seconds 00:17:34.580 [2024-05-15 11:10:52.984329] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:34.580 [2024-05-15 11:10:52.984593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:34.580 [2024-05-15 11:10:52.984660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.580 [2024-05-15 11:10:52.984739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:17:34.580 [2024-05-15 11:10:52.985030] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:34.580 11:10:52 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:17:34.580 00:17:34.580 00:17:34.580 CUnit - A unit testing 
framework for C - Version 2.1-3 00:17:34.580 http://cunit.sourceforge.net/ 00:17:34.580 00:17:34.580 00:17:34.580 Suite: nvme_pcie 00:17:34.580 Test: test_prp_list_append ...passed 00:17:34.580 Test: test_nvme_pcie_hotplug_monitor ...passed 00:17:34.580 Test: test_shadow_doorbell_update ...passed 00:17:34.580 Test: test_build_contig_hw_sgl_request ...passed 00:17:34.580 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:17:34.580 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:17:34.580 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:17:34.580 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:17:34.580 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:17:34.580 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:17:34.580 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:17:34.580 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:17:34.580 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:17:34.580 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:17:34.580 00:17:34.580 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.580 suites 1 1 n/a 0 0 00:17:34.580 tests 14 14 14 0 0 00:17:34.580 asserts 235 235 235 0 n/a 00:17:34.580 00:17:34.580 Elapsed time = 0.010 seconds 00:17:34.580 [2024-05-15 11:10:53.010586] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:17:34.580 [2024-05-15 11:10:53.010874] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:17:34.580 [2024-05-15 11:10:53.010911] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:17:34.580 [2024-05-15 11:10:53.011128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:17:34.580 [2024-05-15 11:10:53.011199] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:17:34.580 [2024-05-15 11:10:53.011395] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:17:34.580 [2024-05-15 11:10:53.011501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
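The nvme_pcie errors above ("virt_addr 0x100001 not dword aligned", "PRP 2 not page aligned (0x900800)", "out of PRP entries") come from deliberately feeding bad buffers into the PRP-list builder. The underlying rules are simple: a PRP address must be 4-byte (dword) aligned, and every PRP entry after the first must additionally be aligned to the memory page, since only the first entry may carry an offset. A small self-contained sketch of that kind of check follows; prp_entry_is_valid is a made-up helper, not the SPDK function, and 4096 is an assumed page size.

    /* Illustrative PRP alignment check -- hypothetical helper, not SPDK code. */
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_PAGE_SIZE 4096u    /* assumed memory page size for this sketch */

    static bool prp_entry_is_valid(uint64_t virt_addr, bool first_entry)
    {
            if (virt_addr & 0x3) {
                    return false;   /* "virt_addr ... not dword aligned" */
            }
            if (!first_entry && (virt_addr & (NVME_PAGE_SIZE - 1))) {
                    return false;   /* "PRP 2 not page aligned": only the first entry may have an offset */
            }
            return true;
    }

    int main(void)
    {
            assert(!prp_entry_is_valid(0x100001, true));    /* odd address: rejected */
            assert(!prp_entry_is_valid(0x900800, false));   /* mid-page second entry: rejected */
            assert(prp_entry_is_valid(0x100000, false));    /* page-aligned: accepted */
            return 0;
    }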
00:17:34.580 [2024-05-15 11:10:53.011571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:17:34.580 [2024-05-15 11:10:53.011614] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:17:34.580 [2024-05-15 11:10:53.011676] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:17:34.580 11:10:53 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:17:34.580 00:17:34.580 00:17:34.580 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.580 http://cunit.sourceforge.net/ 00:17:34.580 00:17:34.580 00:17:34.580 Suite: nvme_ns_cmd 00:17:34.580 Test: nvme_poll_group_create_test ...passed 00:17:34.580 Test: nvme_poll_group_add_remove_test ...passed 00:17:34.580 Test: nvme_poll_group_process_completions ...passed 00:17:34.580 Test: nvme_poll_group_destroy_test ...passed 00:17:34.580 Test: nvme_poll_group_get_free_stats ...passed 00:17:34.580 00:17:34.580 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.580 suites 1 1 n/a 0 0 00:17:34.580 tests 5 5 5 0 0 00:17:34.580 asserts 75 75 75 0 n/a 00:17:34.580 00:17:34.580 Elapsed time = 0.000 seconds 00:17:34.580 11:10:53 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:17:34.580 00:17:34.580 00:17:34.580 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.580 http://cunit.sourceforge.net/ 00:17:34.580 00:17:34.580 00:17:34.580 Suite: nvme_quirks 00:17:34.580 Test: test_nvme_quirks_striping ...passed 00:17:34.580 00:17:34.580 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.580 suites 1 1 n/a 0 0 00:17:34.580 tests 1 1 1 0 0 00:17:34.580 asserts 5 5 5 0 n/a 00:17:34.580 00:17:34.580 Elapsed time = 0.000 seconds 00:17:34.580 11:10:53 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:17:34.580 00:17:34.580 00:17:34.580 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.580 http://cunit.sourceforge.net/ 00:17:34.580 00:17:34.580 00:17:34.580 Suite: nvme_tcp 00:17:34.580 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:17:34.580 Test: test_nvme_tcp_build_iovs ...passed 00:17:34.580 Test: test_nvme_tcp_build_sgl_request ...passed 00:17:34.580 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:17:34.580 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:17:34.580 Test: test_nvme_tcp_req_complete_safe ...passed 00:17:34.580 Test: test_nvme_tcp_req_get ...[2024-05-15 11:10:53.077532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffd99561030, and the iovcnt=16, remaining_size=28672 00:17:34.580 passed 00:17:34.580 Test: test_nvme_tcp_req_init ...passed 00:17:34.580 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:17:34.580 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:17:34.580 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:17:34.580 Test: test_nvme_tcp_alloc_reqs ...[2024-05-15 11:10:53.078012] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd99562d40 is same with the state(6) to be set 00:17:34.580 passed 00:17:34.580 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 
00:17:34.580 Test: test_nvme_tcp_pdu_ch_handle ...passed 00:17:34.580 Test: test_nvme_tcp_qpair_connect_sock ...passed 00:17:34.580 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:17:34.580 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:17:34.580 Test: test_nvme_tcp_icresp_handle ...passed 00:17:34.580 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:17:34.580 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:17:34.580 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:17:34.580 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:17:34.580 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-05-15 11:10:53.078275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd99561f00 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.078343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd99562a90 00:17:34.580 [2024-05-15 11:10:53.078386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1226:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:17:34.580 [2024-05-15 11:10:53.078452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd995623c0 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.078499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1177:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:17:34.580 [2024-05-15 11:10:53.078559] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd995623c0 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.078594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:34.580 [2024-05-15 11:10:53.078617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd995623c0 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.078647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd995623c0 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.078673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd995623c0 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.078718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd995623c0 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.078766] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd995623c0 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.078834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd995623c0 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.078929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:17:34.580 [2024-05-15 11:10:53.078964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:17:34.580 
[2024-05-15 11:10:53.079191] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:17:34.580 [2024-05-15 11:10:53.079297] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd995625d0): PDU Sequence Error 00:17:34.580 [2024-05-15 11:10:53.079361] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1567:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:17:34.580 [2024-05-15 11:10:53.079402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1574:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:17:34.580 [2024-05-15 11:10:53.079439] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd99561f00 is same with the state(5) to be set 00:17:34.580 [2024-05-15 11:10:53.079471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:17:34.581 [2024-05-15 11:10:53.079500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd99561f00 is same with the state(5) to be set 00:17:34.581 [2024-05-15 11:10:53.079554] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd99561f00 is same with the state(0) to be set 00:17:34.581 [2024-05-15 11:10:53.079608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd99562a90): PDU Sequence Error 00:17:34.581 [2024-05-15 11:10:53.079687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1644:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd995611d0 00:17:34.581 [2024-05-15 11:10:53.079863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 354:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd99560850, errno=0, rc=0 00:17:34.581 [2024-05-15 11:10:53.079908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd99560850 is same with the state(5) to be set 00:17:34.581 [2024-05-15 11:10:53.079958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd99560850 is same with the state(5) to be set 00:17:34.581 [2024-05-15 11:10:53.080010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd99560850 (0): Success 00:17:34.581 [2024-05-15 11:10:53.080049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd99560850 (0): Success 00:17:34.581 passed 00:17:34.581 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:17:34.581 Test: test_nvme_tcp_poll_group_get_stats ...[2024-05-15 11:10:53.140696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:17:34.581 [2024-05-15 11:10:53.140823] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
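The two "Failed to create qpair with size N. Minimum queue size is 2." lines above exercise a minimum-queue-depth guard: an NVMe queue keeps one slot empty to distinguish full from empty, so a depth below 2 cannot hold any command. As a rough illustration only, not an SPDK API, a guard of that shape might look like this (validate_io_queue_size is a made-up helper):

    /* Illustrative minimum-queue-size guard -- hypothetical helper, not SPDK code. */
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool validate_io_queue_size(uint32_t num_entries)
    {
            /* One slot always stays empty so head/tail can signal "full",
             * hence a queue of depth 2 is the smallest usable queue. */
            return num_entries >= 2;
    }

    int main(void)
    {
            assert(!validate_io_queue_size(0));   /* matches "size 0 ... Minimum queue size is 2" */
            assert(!validate_io_queue_size(1));   /* matches "size 1 ... Minimum queue size is 2" */
            assert(validate_io_queue_size(2));    /* smallest accepted depth */
            return 0;
    }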
00:17:34.581 passed 00:17:34.581 Test: test_nvme_tcp_ctrlr_construct ...[2024-05-15 11:10:53.141031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:17:34.581 [2024-05-15 11:10:53.141068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:17:34.581 [2024-05-15 11:10:53.141280] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:17:34.581 [2024-05-15 11:10:53.141318] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:34.581 passed 00:17:34.581 Test: test_nvme_tcp_qpair_submit_request ...passed 00:17:34.581 00:17:34.581 [2024-05-15 11:10:53.141388] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:17:34.581 [2024-05-15 11:10:53.141436] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:34.581 [2024-05-15 11:10:53.141510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000000f80 with addr=192.168.1.78, port=23 00:17:34.581 [2024-05-15 11:10:53.141552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:34.581 [2024-05-15 11:10:53.141632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:17:34.581 [2024-05-15 11:10:53.141671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1018:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:17:34.581 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.581 suites 1 1 n/a 0 0 00:17:34.581 tests 27 27 27 0 0 00:17:34.581 asserts 624 624 624 0 n/a 00:17:34.581 00:17:34.581 Elapsed time = 0.070 seconds 00:17:34.581 11:10:53 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:17:34.581 00:17:34.581 00:17:34.581 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.581 http://cunit.sourceforge.net/ 00:17:34.581 00:17:34.581 00:17:34.581 Suite: nvme_transport 00:17:34.581 Test: test_nvme_get_transport ...passed 00:17:34.581 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:17:34.581 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:17:34.581 Test: test_nvme_transport_poll_group_add_remove ...passed 00:17:34.581 Test: test_ctrlr_get_memory_domains ...passed 00:17:34.581 00:17:34.581 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.581 suites 1 1 n/a 0 0 00:17:34.581 tests 5 5 5 0 0 00:17:34.581 asserts 28 28 28 0 n/a 00:17:34.581 00:17:34.581 Elapsed time = 0.000 seconds 00:17:34.581 11:10:53 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:17:34.581 00:17:34.581 00:17:34.581 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.581 http://cunit.sourceforge.net/ 00:17:34.581 00:17:34.581 00:17:34.581 Suite: nvme_io_msg 00:17:34.581 Test: test_nvme_io_msg_send ...passed 00:17:34.581 Test: test_nvme_io_msg_process ...passed 00:17:34.581 Test: 
test_nvme_io_msg_ctrlr_register_unregister ...passed 00:17:34.581 00:17:34.581 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.581 suites 1 1 n/a 0 0 00:17:34.581 tests 3 3 3 0 0 00:17:34.581 asserts 56 56 56 0 n/a 00:17:34.581 00:17:34.581 Elapsed time = 0.000 seconds 00:17:34.581 11:10:53 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:17:34.840 00:17:34.840 00:17:34.840 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.840 http://cunit.sourceforge.net/ 00:17:34.840 00:17:34.840 00:17:34.840 Suite: nvme_pcie_common 00:17:34.840 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-05-15 11:10:53.217786] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:17:34.840 passed 00:17:34.840 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:17:34.840 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:17:34.840 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:17:34.840 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-05-15 11:10:53.218307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:17:34.840 [2024-05-15 11:10:53.218406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:17:34.840 [2024-05-15 11:10:53.218446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:17:34.840 passed 00:17:34.840 Test: test_nvme_pcie_poll_group_get_stats ...[2024-05-15 11:10:53.218721] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:17:34.840 [2024-05-15 11:10:53.218777] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:17:34.840 passed 00:17:34.840 00:17:34.840 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.840 suites 1 1 n/a 0 0 00:17:34.840 tests 6 6 6 0 0 00:17:34.840 asserts 148 148 148 0 n/a 00:17:34.840 00:17:34.840 Elapsed time = 0.000 seconds 00:17:34.840 11:10:53 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:17:34.840 00:17:34.840 00:17:34.840 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.840 http://cunit.sourceforge.net/ 00:17:34.840 00:17:34.840 00:17:34.840 Suite: nvme_fabric 00:17:34.840 Test: test_nvme_fabric_prop_set_cmd ...passed 00:17:34.840 Test: test_nvme_fabric_prop_get_cmd ...passed 00:17:34.840 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:17:34.840 Test: test_nvme_fabric_discover_probe ...passed 00:17:34.840 Test: test_nvme_fabric_qpair_connect ...passed 00:17:34.840 00:17:34.840 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.840 suites 1 1 n/a 0 0 00:17:34.840 tests 5 5 5 0 0 00:17:34.840 asserts 60 60 60 0 n/a 00:17:34.840 00:17:34.840 Elapsed time = 0.000 seconds 00:17:34.840 [2024-05-15 11:10:53.243446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:17:34.840 11:10:53 unittest.unittest_nvme -- 
unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:17:34.840 00:17:34.840 00:17:34.840 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.840 http://cunit.sourceforge.net/ 00:17:34.840 00:17:34.840 00:17:34.840 Suite: nvme_opal 00:17:34.840 Test: test_opal_nvme_security_recv_send_done ...passed 00:17:34.840 Test: test_opal_add_short_atom_header ...passed 00:17:34.840 00:17:34.840 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.840 suites 1 1 n/a 0 0 00:17:34.840 tests 2 2 2 0 0 00:17:34.840 asserts 22 22 22 0 n/a 00:17:34.840 00:17:34.840 Elapsed time = 0.000 seconds 00:17:34.840 [2024-05-15 11:10:53.266900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:17:34.840 ************************************ 00:17:34.840 END TEST unittest_nvme 00:17:34.840 ************************************ 00:17:34.840 00:17:34.840 real 0m0.898s 00:17:34.840 user 0m0.401s 00:17:34.840 sys 0m0.353s 00:17:34.840 11:10:53 unittest.unittest_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:34.840 11:10:53 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.840 11:10:53 unittest -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:17:34.840 11:10:53 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:34.840 11:10:53 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:34.840 11:10:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:34.840 ************************************ 00:17:34.840 START TEST unittest_log 00:17:34.840 ************************************ 00:17:34.840 11:10:53 unittest.unittest_log -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:17:34.840 00:17:34.840 00:17:34.840 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.840 http://cunit.sourceforge.net/ 00:17:34.840 00:17:34.840 00:17:34.840 Suite: log 00:17:34.840 Test: log_test ...passed 00:17:34.840 Test: deprecation ...[2024-05-15 11:10:53.341068] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:17:34.840 [2024-05-15 11:10:53.341248] log_ut.c: 57:log_test: *DEBUG*: log test 00:17:34.840 log dump test: 00:17:34.840 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:17:34.840 spdk dump test: 00:17:34.840 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:17:34.840 spdk dump test: 00:17:34.840 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:17:34.840 00000010 65 20 63 68 61 72 73 e chars 00:17:35.775 passed 00:17:35.775 00:17:35.775 Run Summary: Type Total Ran Passed Failed Inactive 00:17:35.775 suites 1 1 n/a 0 0 00:17:35.775 tests 2 2 2 0 0 00:17:35.775 asserts 73 73 73 0 n/a 00:17:35.775 00:17:35.775 Elapsed time = 0.000 seconds 00:17:35.775 00:17:35.775 real 0m1.030s 00:17:35.775 user 0m0.013s 00:17:35.775 sys 0m0.018s 00:17:35.775 11:10:54 unittest.unittest_log -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:35.775 11:10:54 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:17:35.775 ************************************ 00:17:35.775 END TEST unittest_log 00:17:35.775 ************************************ 00:17:35.775 11:10:54 unittest -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:17:35.775 11:10:54 unittest -- common/autotest_common.sh@1097 -- # 
'[' 2 -le 1 ']' 00:17:35.775 11:10:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:35.775 11:10:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:35.775 ************************************ 00:17:35.775 START TEST unittest_lvol 00:17:35.775 ************************************ 00:17:35.775 11:10:54 unittest.unittest_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:17:36.038 00:17:36.038 00:17:36.038 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.038 http://cunit.sourceforge.net/ 00:17:36.038 00:17:36.038 00:17:36.038 Suite: lvol 00:17:36.038 Test: lvs_init_unload_success ...passed 00:17:36.038 Test: lvs_init_destroy_success ...passed 00:17:36.038 Test: lvs_init_opts_success ...passed 00:17:36.038 Test: lvs_unload_lvs_is_null_fail ...passed 00:17:36.038 Test: lvs_names ...passed 00:17:36.038 Test: lvol_create_destroy_success ...[2024-05-15 11:10:54.415563] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:17:36.038 [2024-05-15 11:10:54.415918] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:17:36.038 [2024-05-15 11:10:54.416054] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:17:36.038 [2024-05-15 11:10:54.416107] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:17:36.038 [2024-05-15 11:10:54.416142] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:17:36.038 [2024-05-15 11:10:54.416255] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:17:36.038 passed 00:17:36.038 Test: lvol_create_fail ...[2024-05-15 11:10:54.416571] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:17:36.038 [2024-05-15 11:10:54.416669] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:17:36.038 passed 00:17:36.038 Test: lvol_destroy_fail ...passed 00:17:36.038 Test: lvol_close ...passed 00:17:36.038 Test: lvol_resize ...passed 00:17:36.038 Test: lvol_set_read_only ...passed 00:17:36.038 Test: test_lvs_load ...passed 00:17:36.038 Test: lvols_load ...passed 00:17:36.038 Test: lvol_open ...passed 00:17:36.038 Test: lvol_snapshot ...passed 00:17:36.038 Test: lvol_snapshot_fail ...passed 00:17:36.038 Test: lvol_clone ...passed 00:17:36.038 Test: lvol_clone_fail ...passed 00:17:36.038 Test: lvol_iter_clones ...passed 00:17:36.038 Test: lvol_refcnt ...passed 00:17:36.038 Test: lvol_names ...passed 00:17:36.038 Test: lvol_create_thin_provisioned ...passed 00:17:36.038 Test: lvol_rename ...passed 00:17:36.038 Test: lvs_rename ...passed 00:17:36.038 Test: lvol_inflate ...passed 00:17:36.038 Test: lvol_decouple_parent ...passed 00:17:36.038 Test: lvol_get_xattr ...passed 00:17:36.038 Test: lvol_esnap_reload ...passed 00:17:36.038 Test: lvol_esnap_create_bad_args ...passed 00:17:36.038 Test: lvol_esnap_create_delete ...passed 00:17:36.038 Test: lvol_esnap_load_esnaps ...passed 00:17:36.038 Test: lvol_esnap_missing ...passed 00:17:36.038 Test: lvol_esnap_hotplug ... 
00:17:36.038 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:17:36.038 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:17:36.038 lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM 00:17:36.038 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:17:36.038 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:17:36.038 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:17:36.038 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:17:36.038 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:17:36.038 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:17:36.038 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:17:36.038 [2024-05-15 11:10:54.417242] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:17:36.038 [2024-05-15 11:10:54.417391] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:17:36.038 [2024-05-15 11:10:54.417442] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:17:36.038 [2024-05-15 11:10:54.417848] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:17:36.038 [2024-05-15 11:10:54.417891] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:17:36.038 [2024-05-15 11:10:54.418015] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:17:36.038 [2024-05-15 11:10:54.418085] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:17:36.038 [2024-05-15 11:10:54.418486] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:17:36.038 [2024-05-15 11:10:54.418915] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:17:36.038 [2024-05-15 11:10:54.419285] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 210ffc90-e42d-483d-9ffe-8d0cce784f5b because it is still open 00:17:36.038 [2024-05-15 11:10:54.419423] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
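
The "No name specified", "Name has no null terminator" and "already exists" errors in the lvol suite above exercise name validation for lvols and lvolstores. A hedged sketch of the first two checks, assuming a fixed-size name buffer (the 64-byte limit and helper name are assumptions); the uniqueness check, a lookup in the store's lvol list, is omitted.

    #include <string.h>

    #define LVOL_NAME_MAX 64   /* assumed fixed-size name buffer */

    static int lvol_name_is_valid(const char name[LVOL_NAME_MAX])
    {
        if (name[0] == '\0')
            return 0;   /* "No name specified." */
        /* the name must terminate inside the fixed-size buffer */
        if (memchr(name, '\0', LVOL_NAME_MAX) == NULL)
            return 0;   /* "Name has no null terminator." */
        return 1;
    }
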
00:17:36.038 [2024-05-15 11:10:54.419507] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:17:36.038 [2024-05-15 11:10:54.419658] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:17:36.038 [2024-05-15 11:10:54.419930] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:17:36.038 [2024-05-15 11:10:54.420001] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:17:36.038 [2024-05-15 11:10:54.420114] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:17:36.038 [2024-05-15 11:10:54.420261] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:17:36.039 [2024-05-15 11:10:54.420397] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:17:36.039 [2024-05-15 11:10:54.420695] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:17:36.039 [2024-05-15 11:10:54.420724] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:17:36.039 [2024-05-15 11:10:54.420761] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:17:36.039 [2024-05-15 11:10:54.420883] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:17:36.039 [2024-05-15 11:10:54.420997] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:17:36.039 [2024-05-15 11:10:54.421253] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:17:36.039 [2024-05-15 11:10:54.421414] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:17:36.039 [2024-05-15 11:10:54.421457] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:17:36.039 [2024-05-15 11:10:54.421927] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d974c7a3-942e-475a-af04-77c061665561: failed to create esnap bs_dev: error -12 00:17:36.039 [2024-05-15 11:10:54.422146] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1a65c2a3-4f0f-4338-a5ec-6309149abbf7: failed to create esnap bs_dev: error -12 00:17:36.039 [2024-05-15 11:10:54.422271] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 0720b62f-f938-49cd-bef0-3ee309239d96: failed to create esnap bs_dev: error -12 00:17:36.039 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:17:36.039 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:17:36.039 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:17:36.039 passed 00:17:36.039 Test: lvol_get_by ...passed 00:17:36.039 Test: lvol_shallow_copy ...passed 00:17:36.039 00:17:36.039 Run Summary: Type Total Ran Passed Failed 
Inactive 00:17:36.039 suites 1 1 n/a 0 0 00:17:36.039 tests 35 35 35 0 0 00:17:36.039 asserts 1459 1459 1459 0 n/a 00:17:36.039 00:17:36.039 Elapsed time = 0.010 seconds 00:17:36.039 [2024-05-15 11:10:54.423832] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:17:36.039 [2024-05-15 11:10:54.423877] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 176d5cc4-0e1d-4f8b-9965-cc06f74cb96c shallow copy, ext_dev must not be NULL 00:17:36.039 ************************************ 00:17:36.039 END TEST unittest_lvol 00:17:36.039 ************************************ 00:17:36.039 00:17:36.039 real 0m0.035s 00:17:36.039 user 0m0.018s 00:17:36.039 sys 0m0.016s 00:17:36.039 11:10:54 unittest.unittest_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.039 11:10:54 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:36.039 11:10:54 unittest -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:36.039 11:10:54 unittest -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:17:36.039 11:10:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:36.039 11:10:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.039 11:10:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:36.039 ************************************ 00:17:36.039 START TEST unittest_nvme_rdma 00:17:36.039 ************************************ 00:17:36.039 11:10:54 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:17:36.039 00:17:36.039 00:17:36.039 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.039 http://cunit.sourceforge.net/ 00:17:36.039 00:17:36.039 00:17:36.039 Suite: nvme_rdma 00:17:36.039 Test: test_nvme_rdma_build_sgl_request ...passed 00:17:36.039 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:17:36.039 Test: test_nvme_rdma_build_contig_request ...passed 00:17:36.039 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:17:36.039 Test: test_nvme_rdma_create_reqs ...[2024-05-15 11:10:54.494195] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:17:36.039 [2024-05-15 11:10:54.494453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:17:36.039 [2024-05-15 11:10:54.494535] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:17:36.039 [2024-05-15 11:10:54.494610] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:17:36.039 [2024-05-15 11:10:54.494757] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:17:36.039 passed 00:17:36.039 Test: test_nvme_rdma_create_rsps ...passed 00:17:36.039 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:17:36.039 Test: test_nvme_rdma_poller_create ...passed 00:17:36.039 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:17:36.039 Test: test_nvme_rdma_ctrlr_construct ...passed 00:17:36.039 Test: 
test_nvme_rdma_req_put_and_get ...passed 00:17:36.039 Test: test_nvme_rdma_req_init ...passed 00:17:36.039 Test: test_nvme_rdma_validate_cm_event ...passed 00:17:36.039 Test: test_nvme_rdma_qpair_init ...passed 00:17:36.039 Test: test_nvme_rdma_qpair_submit_request ...passed 00:17:36.039 Test: test_nvme_rdma_memory_domain ...passed 00:17:36.039 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:17:36.039 Test: test_rdma_get_memory_translation ...[2024-05-15 11:10:54.495458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:17:36.039 [2024-05-15 11:10:54.495559] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:17:36.039 [2024-05-15 11:10:54.495615] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:17:36.039 [2024-05-15 11:10:54.495785] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:17:36.039 [2024-05-15 11:10:54.496065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:17:36.039 [2024-05-15 11:10:54.496112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:17:36.039 [2024-05-15 11:10:54.496196] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:17:36.039 [2024-05-15 11:10:54.496268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:17:36.039 passed 00:17:36.039 Test: test_get_rdma_qpair_from_wc ...passed 00:17:36.039 Test: test_nvme_rdma_ctrlr_get_max_sges ...[2024-05-15 11:10:54.496332] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:17:36.039 passed 00:17:36.039 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:17:36.039 Test: test_nvme_rdma_qpair_set_poller ...passed 00:17:36.039 00:17:36.039 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.039 suites 1 1 n/a 0 0 00:17:36.039 tests 22 22 22 0 0 00:17:36.039 asserts 412 412 412 0 n/a 00:17:36.039 00:17:36.039 Elapsed time = 0.010 seconds 00:17:36.039 [2024-05-15 11:10:54.496773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:17:36.039 [2024-05-15 11:10:54.496835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:17:36.039 [2024-05-15 11:10:54.496978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
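
The nvme_rdma SGL failures above ("SGL length 16777216 exceeds max keyed SGL block size 16777215" and "Size of SGL descriptors (64) exceeds ICD (60)") exercise the request builder's transfer-size limits. A sketch of both checks; the constants are taken straight from the log messages and the function names are illustrative.

    #include <stdint.h>

    /* keyed SGL data block descriptors carry a 24-bit length field */
    #define MAX_KEYED_SGL_LEN ((1u << 24) - 1)   /* 16777215 bytes */

    static int check_keyed_sgl(uint64_t payload_len)
    {
        /* a 16 MiB (16777216-byte) transfer is one byte too large */
        return payload_len <= MAX_KEYED_SGL_LEN ? 0 : -1;
    }

    static int check_inline_sgl(uint32_t descriptors_size, uint32_t icd_size)
    {
        /* in-capsule SGL descriptors must fit the in-capsule data area */
        return descriptors_size <= icd_size ? 0 : -1;
    }
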
00:17:36.039 [2024-05-15 11:10:54.497018] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:17:36.039 [2024-05-15 11:10:54.497042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd7ff12580 on poll group 0x60c000000040 00:17:36.039 [2024-05-15 11:10:54.497096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:17:36.039 [2024-05-15 11:10:54.497128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:17:36.039 [2024-05-15 11:10:54.497151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd7ff12580 on poll group 0x60c000000040 00:17:36.039 [2024-05-15 11:10:54.497194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:17:36.039 ************************************ 00:17:36.039 END TEST unittest_nvme_rdma 00:17:36.039 ************************************ 00:17:36.039 00:17:36.039 real 0m0.026s 00:17:36.039 user 0m0.013s 00:17:36.039 sys 0m0.013s 00:17:36.039 11:10:54 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.039 11:10:54 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:36.039 11:10:54 unittest -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:17:36.039 11:10:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:36.039 11:10:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.039 11:10:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:36.039 ************************************ 00:17:36.039 START TEST unittest_nvmf_transport 00:17:36.039 ************************************ 00:17:36.039 11:10:54 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:17:36.039 00:17:36.039 00:17:36.040 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.040 http://cunit.sourceforge.net/ 00:17:36.040 00:17:36.040 00:17:36.040 Suite: nvmf 00:17:36.040 Test: test_spdk_nvmf_transport_create ...passed 00:17:36.040 Test: test_nvmf_transport_poll_group_create ...passed 00:17:36.040 Test: test_spdk_nvmf_transport_opts_init ...passed 00:17:36.040 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:17:36.040 00:17:36.040 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.040 suites 1 1 n/a 0 0 00:17:36.040 tests 4 4 4 0 0 00:17:36.040 asserts 49 49 49 0 n/a 00:17:36.040 00:17:36.040 Elapsed time = 0.000 seconds 00:17:36.040 [2024-05-15 11:10:54.566954] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:17:36.040 [2024-05-15 11:10:54.567196] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:17:36.040 [2024-05-15 11:10:54.567241] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:17:36.040 [2024-05-15 11:10:54.567332] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:17:36.040 [2024-05-15 11:10:54.567449] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:17:36.040 [2024-05-15 11:10:54.567533] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:17:36.040 [2024-05-15 11:10:54.567558] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:17:36.040 ************************************ 00:17:36.040 END TEST unittest_nvmf_transport 00:17:36.040 ************************************ 00:17:36.040 00:17:36.040 real 0m0.025s 00:17:36.040 user 0m0.010s 00:17:36.040 sys 0m0.015s 00:17:36.040 11:10:54 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.040 11:10:54 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:17:36.040 11:10:54 unittest -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:17:36.040 11:10:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:36.040 11:10:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.040 11:10:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:36.040 ************************************ 00:17:36.040 START TEST unittest_rdma 00:17:36.040 ************************************ 00:17:36.040 11:10:54 unittest.unittest_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:17:36.040 00:17:36.040 00:17:36.040 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.040 http://cunit.sourceforge.net/ 00:17:36.040 00:17:36.040 00:17:36.040 Suite: rdma_common 00:17:36.040 Test: test_spdk_rdma_pd ...passed 00:17:36.040 00:17:36.040 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.040 suites 1 1 n/a 0 0 00:17:36.040 tests 1 1 1 0 0 00:17:36.040 asserts 31 31 31 0 n/a 00:17:36.040 00:17:36.040 Elapsed time = 0.000 seconds 00:17:36.040 [2024-05-15 11:10:54.641196] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:17:36.040 [2024-05-15 11:10:54.641468] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:17:36.040 ************************************ 00:17:36.040 END TEST unittest_rdma 00:17:36.040 ************************************ 00:17:36.040 00:17:36.040 real 0m0.028s 00:17:36.040 user 0m0.011s 00:17:36.040 sys 0m0.018s 00:17:36.040 11:10:54 unittest.unittest_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.040 11:10:54 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:36.299 11:10:54 unittest -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:36.299 11:10:54 unittest 
-- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:17:36.299 11:10:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:36.299 11:10:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.299 11:10:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:36.299 ************************************ 00:17:36.299 START TEST unittest_nvme_cuse 00:17:36.299 ************************************ 00:17:36.299 11:10:54 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:17:36.299 00:17:36.299 00:17:36.299 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.299 http://cunit.sourceforge.net/ 00:17:36.299 00:17:36.299 00:17:36.299 Suite: nvme_cuse 00:17:36.299 Test: test_cuse_nvme_submit_io_read_write ...passed 00:17:36.299 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:17:36.299 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:17:36.299 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:17:36.299 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:17:36.299 Test: test_cuse_nvme_submit_io ...[2024-05-15 11:10:54.718004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:17:36.299 passed 00:17:36.299 Test: test_cuse_nvme_reset ...passed 00:17:36.299 Test: test_nvme_cuse_stop ...[2024-05-15 11:10:54.718613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:17:36.868 passed 00:17:36.868 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:17:36.868 00:17:36.868 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.868 suites 1 1 n/a 0 0 00:17:36.868 tests 9 9 9 0 0 00:17:36.868 asserts 118 118 118 0 n/a 00:17:36.868 00:17:36.868 Elapsed time = 0.500 seconds 00:17:36.868 ************************************ 00:17:36.868 END TEST unittest_nvme_cuse 00:17:36.868 ************************************ 00:17:36.868 00:17:36.868 real 0m0.526s 00:17:36.868 user 0m0.260s 00:17:36.868 sys 0m0.265s 00:17:36.868 11:10:55 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.868 11:10:55 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:17:36.868 11:10:55 unittest -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:17:36.868 11:10:55 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:36.868 11:10:55 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.868 11:10:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:36.868 ************************************ 00:17:36.868 START TEST unittest_nvmf 00:17:36.868 ************************************ 00:17:36.868 11:10:55 unittest.unittest_nvmf -- common/autotest_common.sh@1121 -- # unittest_nvmf 00:17:36.868 11:10:55 unittest.unittest_nvmf -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:17:36.868 00:17:36.868 00:17:36.868 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.868 http://cunit.sourceforge.net/ 00:17:36.868 00:17:36.868 00:17:36.868 Suite: nvmf 00:17:36.868 Test: test_get_log_page ...passed 00:17:36.868 Test: test_process_fabrics_cmd ...passed 00:17:36.868 Test: test_connect ...[2024-05-15 11:10:55.289429] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x2 00:17:36.868 [2024-05-15 11:10:55.289696] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:17:36.868 [2024-05-15 11:10:55.290164] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1006:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:17:36.868 passed 00:17:36.868 Test: test_get_ns_id_desc_list ...passed 00:17:36.868 Test: test_identify_ns ...[2024-05-15 11:10:55.290479] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 869:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:17:36.868 [2024-05-15 11:10:55.290524] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1045:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:17:36.868 [2024-05-15 11:10:55.290559] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:17:36.868 [2024-05-15 11:10:55.290646] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 880:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:17:36.868 [2024-05-15 11:10:55.290694] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 887:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:17:36.868 [2024-05-15 11:10:55.290724] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:17:36.868 [2024-05-15 11:10:55.290780] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 920:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:17:36.868 [2024-05-15 11:10:55.290861] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:17:36.868 [2024-05-15 11:10:55.290924] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 670:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:17:36.868 [2024-05-15 11:10:55.291044] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:17:36.868 [2024-05-15 11:10:55.291113] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:17:36.868 [2024-05-15 11:10:55.291155] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:17:36.868 [2024-05-15 11:10:55.291205] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 713:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:17:36.868 [2024-05-15 11:10:55.291264] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 293:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:17:36.868 [2024-05-15 11:10:55.291355] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:17:36.868 [2024-05-15 11:10:55.291399] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:17:36.868 [2024-05-15 11:10:55.291585] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:36.868 [2024-05-15 11:10:55.291784] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:17:36.868 passed 00:17:36.868 Test: 
test_identify_ns_iocs_specific ...[2024-05-15 11:10:55.291882] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:36.868 [2024-05-15 11:10:55.291992] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:36.868 passed 00:17:36.868 Test: test_reservation_write_exclusive ...passed 00:17:36.868 Test: test_reservation_exclusive_access ...passed 00:17:36.868 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:17:36.868 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:17:36.868 Test: test_reservation_notification_log_page ...[2024-05-15 11:10:55.292168] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:36.868 passed 00:17:36.868 Test: test_get_dif_ctx ...passed 00:17:36.868 Test: test_set_get_features ...passed 00:17:36.868 Test: test_identify_ctrlr ...passed 00:17:36.868 Test: test_identify_ctrlr_iocs_specific ...[2024-05-15 11:10:55.292743] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:17:36.868 [2024-05-15 11:10:55.292821] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:17:36.868 [2024-05-15 11:10:55.292858] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1653:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:17:36.868 [2024-05-15 11:10:55.292887] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1729:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:17:36.868 passed 00:17:36.868 Test: test_custom_admin_cmd ...passed 00:17:36.868 Test: test_fused_compare_and_write ...passed 00:17:36.868 Test: test_multi_async_event_reqs ...passed 00:17:36.868 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:17:36.868 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:17:36.868 Test: test_multi_async_events ...passed 00:17:36.868 Test: test_rae ...passed 00:17:36.868 Test: test_nvmf_ctrlr_create_destruct ...passed 00:17:36.868 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:17:36.868 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:17:36.868 Test: test_zcopy_read ...passed 00:17:36.868 Test: test_zcopy_write ...passed 00:17:36.868 Test: test_nvmf_property_set ...passed 00:17:36.868 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-05-15 11:10:55.293208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4212:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:17:36.869 [2024-05-15 11:10:55.293241] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4201:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:17:36.869 [2024-05-15 11:10:55.293279] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4219:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:17:36.869 [2024-05-15 11:10:55.293653] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:17:36.869 [2024-05-15 11:10:55.293700] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:17:36.869 [2024-05-15 11:10:55.293868] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 
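
The "Invalid TMPSEL 9" and "Invalid THSEL 3" errors above test Set Features validation for the Temperature Threshold feature (FID 04h). A sketch of the implied cdw11 checks; the bit positions follow the NVMe spec, the helper is hypothetical, and the treatment of TMPSEL 0Fh (all sensors) is an assumption.

    #include <stdint.h>

    static int temp_threshold_cdw11_valid(uint32_t cdw11)
    {
        uint32_t tmpsel = (cdw11 >> 16) & 0xf;  /* threshold temp select */
        uint32_t thsel  = (cdw11 >> 20) & 0x3;  /* over/under threshold  */

        /* sensors 0..8 are valid, so 9 is rejected ("Invalid TMPSEL 9");
           0Fh (all sensors) is rejected here too, which is an assumption */
        if (tmpsel > 8)
            return 0;
        if (thsel > 1)   /* 0 = over, 1 = under; "Invalid THSEL 3" */
            return 0;
        return 1;
    }
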
00:17:36.869 passed 00:17:36.869 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:17:36.869 Test: test_nvmf_ctrlr_ns_attachment ...[2024-05-15 11:10:55.293908] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:17:36.869 [2024-05-15 11:10:55.293959] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1963:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:17:36.869 [2024-05-15 11:10:55.293991] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:17:36.869 [2024-05-15 11:10:55.294039] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1981:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:17:36.869 passed 00:17:36.869 Test: test_nvmf_check_qpair_active ...passed 00:17:36.869 00:17:36.869 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.869 suites 1 1 n/a 0 0 00:17:36.869 tests 32 32 32 0 0 00:17:36.869 asserts 977 977 977 0 n/a 00:17:36.869 00:17:36.869 Elapsed time = 0.010 seconds 00:17:36.869 [2024-05-15 11:10:55.294139] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:17:36.869 [2024-05-15 11:10:55.294176] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4691:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:17:36.869 [2024-05-15 11:10:55.294206] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:17:36.869 [2024-05-15 11:10:55.294239] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:17:36.869 [2024-05-15 11:10:55.294262] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:17:36.869 11:10:55 unittest.unittest_nvmf -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:17:36.869 00:17:36.869 00:17:36.869 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.869 http://cunit.sourceforge.net/ 00:17:36.869 00:17:36.869 00:17:36.869 Suite: nvmf 00:17:36.869 Test: test_get_rw_params ...passed 00:17:36.869 Test: test_get_rw_ext_params ...passed 00:17:36.869 Test: test_lba_in_range ...passed 00:17:36.869 Test: test_get_dif_ctx ...passed 00:17:36.869 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:17:36.869 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:17:36.869 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:17:36.869 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:17:36.869 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:17:36.869 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:17:36.869 00:17:36.869 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.869 suites 1 1 n/a 0 0 00:17:36.869 tests 10 10 10 0 0 00:17:36.869 asserts 159 159 159 0 n/a 00:17:36.869 00:17:36.869 Elapsed time = 0.000 seconds 00:17:36.869 [2024-05-15 11:10:55.319024] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:17:36.869 [2024-05-15 11:10:55.319264] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: 
*ERROR*: end of media 00:17:36.869 [2024-05-15 11:10:55.319338] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:17:36.869 [2024-05-15 11:10:55.319400] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:17:36.869 [2024-05-15 11:10:55.319475] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:17:36.869 [2024-05-15 11:10:55.319566] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:17:36.869 [2024-05-15 11:10:55.319593] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:17:36.869 [2024-05-15 11:10:55.319637] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:17:36.869 [2024-05-15 11:10:55.319667] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:17:36.869 11:10:55 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:17:36.869 00:17:36.869 00:17:36.869 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.869 http://cunit.sourceforge.net/ 00:17:36.869 00:17:36.869 00:17:36.869 Suite: nvmf 00:17:36.869 Test: test_discovery_log ...passed 00:17:36.869 Test: test_discovery_log_with_filters ...passed 00:17:36.869 00:17:36.869 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.869 suites 1 1 n/a 0 0 00:17:36.869 tests 2 2 2 0 0 00:17:36.869 asserts 238 238 238 0 n/a 00:17:36.869 00:17:36.869 Elapsed time = 0.000 seconds 00:17:36.869 11:10:55 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:17:36.869 00:17:36.869 00:17:36.869 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.869 http://cunit.sourceforge.net/ 00:17:36.869 00:17:36.869 00:17:36.869 Suite: nvmf 00:17:36.869 Test: nvmf_test_create_subsystem ...[2024-05-15 11:10:55.373089] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:17:36.869 [2024-05-15 11:10:55.373387] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:17:36.869 [2024-05-15 11:10:55.373550] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:17:36.869 [2024-05-15 11:10:55.373678] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:17:36.869 [2024-05-15 11:10:55.373737] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 
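
The "end of media" and "Write NLB 2 * block size 512 > SGL length 1023" rejections in the ctrlr_bdev suite above come from the LBA-range and buffer-size checks applied before an I/O reaches the bdev layer. A sketch of both, with illustrative names; num_blocks is the namespace capacity in blocks.

    #include <stdint.h>

    static int io_range_ok(uint64_t start_lba, uint64_t nlb,
                           uint64_t num_blocks, uint32_t block_size,
                           uint64_t sgl_len)
    {
        /* reject wrap-around and access past the last block */
        if (start_lba + nlb < start_lba || start_lba + nlb > num_blocks)
            return 0;   /* "end of media" */
        /* the data buffer must cover every block transferred */
        if (nlb * block_size > sgl_len)
            return 0;   /* e.g. 2 * 512 = 1024 > SGL length 1023 */
        return 1;
    }
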
00:17:36.869 [2024-05-15 11:10:55.373784] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:17:36.869 [2024-05-15 11:10:55.373920] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:17:36.869 [2024-05-15 11:10:55.374008] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:17:36.869 [2024-05-15 11:10:55.374055] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:17:36.869 [2024-05-15 11:10:55.374104] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:17:36.869 [2024-05-15 11:10:55.374174] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:17:36.869 [2024-05-15 11:10:55.374233] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:17:36.869 [2024-05-15 11:10:55.374344] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:17:36.869 passed 00:17:36.869 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:17:36.869 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:17:36.869 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:17:36.869 Test: test_spdk_nvmf_ns_visible ...[2024-05-15 11:10:55.374490] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:17:36.869 [2024-05-15 11:10:55.374601] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:17:36.869 [2024-05-15 11:10:55.374663] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:17:36.869 [2024-05-15 11:10:55.374784] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:17:36.869 [2024-05-15 11:10:55.374883] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:17:36.869 [2024-05-15 11:10:55.374927] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:17:36.869 [2024-05-15 11:10:55.375008] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:17:36.869 [2024-05-15 11:10:55.375066] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:17:36.869 [2024-05-15 11:10:55.375107] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:17:36.869 [2024-05-15 11:10:55.375327] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:17:36.869 [2024-05-15 11:10:55.375397] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1962:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:17:36.869 [2024-05-15 11:10:55.375639] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2090:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
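
The subsystem suite above walks the NQN validity rules one rejection at a time. A condensed sketch of the length and prefix checks; the 11-to-223-byte limits come from the log messages, while the per-label rules (start with a letter, end alphanumeric, valid UTF-8) and the uuid form are noted as comments rather than implemented.

    #include <string.h>

    static int nqn_is_valid(const char *nqn)
    {
        size_t len = strlen(nqn);

        if (len < 11 || len > 223)
            return 0;   /* "length 0 < min 11", "length 224 > max 223" */
        if (strncmp(nqn, "nqn.", 4) != 0)
            return 0;   /* user NQNs carry a date and ':'-prefixed name */
        /* remaining rules, omitted here: each domain label starts with a
           letter, ends alphanumeric, and is valid UTF-8; the uuid form is
           "nqn.2014-08.org.nvmexpress:uuid:" plus a 36-character UUID */
        return 1;
    }
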
00:17:36.869 passed 00:17:36.869 Test: test_reservation_register ...passed 00:17:36.869 Test: test_reservation_register_with_ptpl ...passed 00:17:36.869 Test: test_reservation_acquire_preempt_1 ...passed 00:17:36.870 Test: test_reservation_acquire_release_with_ptpl ...[2024-05-15 11:10:55.375802] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:17:36.870 [2024-05-15 11:10:55.376574] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3029:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:17:36.870 [2024-05-15 11:10:55.376711] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3087:nvmf_ns_reservation_register: *ERROR*: No registrant 00:17:36.870 [2024-05-15 11:10:55.378429] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3029:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:17:36.870 passed 00:17:36.870 Test: test_reservation_release ...passed 00:17:36.870 Test: test_reservation_unregister_notification ...passed 00:17:36.870 Test: test_reservation_release_notification ...passed 00:17:36.870 Test: test_reservation_release_notification_write_exclusive ...passed 00:17:36.870 Test: test_reservation_clear_notification ...passed 00:17:36.870 Test: test_reservation_preempt_notification ...passed 00:17:36.870 Test: test_spdk_nvmf_ns_event ...passed 00:17:36.870 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:17:36.870 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:17:36.870 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:17:36.870 Test: test_nvmf_ns_reservation_report ...passed 00:17:36.870 Test: test_nvmf_nqn_is_valid ...passed 00:17:36.870 Test: test_nvmf_ns_reservation_restore ...passed 00:17:36.870 Test: test_nvmf_subsystem_state_change ...passed 00:17:36.870 Test: test_nvmf_reservation_custom_ops ...passed 00:17:36.870 00:17:36.870 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.870 suites 1 1 n/a 0 0 00:17:36.870 tests 24 24 24 0 0 00:17:36.870 asserts 499 499 499 0 n/a 00:17:36.870 00:17:36.870 Elapsed time = 0.010 seconds 00:17:36.870 [2024-05-15 11:10:55.380392] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3029:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:17:36.870 [2024-05-15 11:10:55.380599] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3029:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:17:36.870 [2024-05-15 11:10:55.380783] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3029:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:17:36.870 [2024-05-15 11:10:55.381001] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3029:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:17:36.870 [2024-05-15 11:10:55.381240] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3029:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:17:36.870 [2024-05-15 11:10:55.381479] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3029:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:17:36.870 [2024-05-15 11:10:55.382115] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:17:36.870 [2024-05-15 11:10:55.382221] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:17:36.870 [2024-05-15 11:10:55.382376] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3392:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:17:36.870 [2024-05-15 11:10:55.382480] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:17:36.870 [2024-05-15 11:10:55.382559] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:3f284030-843d-4bd0-b5ad-4775a1410d9": uuid is not the correct length 00:17:36.870 [2024-05-15 11:10:55.382619] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:17:36.870 [2024-05-15 11:10:55.382864] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2586:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:17:36.870 11:10:55 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:17:36.870 00:17:36.870 00:17:36.870 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.870 http://cunit.sourceforge.net/ 00:17:36.870 00:17:36.870 00:17:36.870 Suite: nvmf 00:17:36.870 Test: test_nvmf_tcp_create ...passed 00:17:36.870 Test: test_nvmf_tcp_destroy ...[2024-05-15 11:10:55.423474] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:17:36.870 passed 00:17:36.870 Test: test_nvmf_tcp_poll_group_create ...passed 00:17:36.870 Test: test_nvmf_tcp_send_c2h_data ...passed 00:17:36.870 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:17:36.870 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:17:37.130 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:17:37.130 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:17:37.130 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:17:37.130 Test: test_nvmf_tcp_icreq_handle ...[2024-05-15 11:10:55.527389] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.130 [2024-05-15 11:10:55.527484] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebbc40 is same with the state(5) to be set 00:17:37.130 [2024-05-15 11:10:55.527567] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebbc40 is same with the state(5) to be set 00:17:37.130 [2024-05-15 11:10:55.527605] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.130 [2024-05-15 11:10:55.527630] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebbc40 is same with the state(5) to be set 00:17:37.130 [2024-05-15 11:10:55.527725] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:17:37.130 [2024-05-15 11:10:55.527845] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.130 [2024-05-15 11:10:55.527897] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebbc40 is same with the state(5) to be set 00:17:37.130 [2024-05-15 11:10:55.527923] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:17:37.130 [2024-05-15 11:10:55.527961] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebbc40 is same with the state(5) to be set 00:17:37.130 [2024-05-15 11:10:55.527985] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.528019] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebbc40 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.528055] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:17:37.131 passed 00:17:37.131 Test: test_nvmf_tcp_check_xfer_type ...passed 00:17:37.131 Test: test_nvmf_tcp_invalid_sgl ...[2024-05-15 11:10:55.528103] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebbc40 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.528574] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2508:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:17:37.131 [2024-05-15 11:10:55.528623] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.528650] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebbc40 is same with the state(5) to be set 00:17:37.131 passed 00:17:37.131 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-05-15 11:10:55.528744] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2240:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffe0eebc9a0 00:17:37.131 [2024-05-15 11:10:55.528936] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.528992] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 passed 00:17:37.131 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-05-15 11:10:55.529387] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2297:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffe0eebc100 00:17:37.131 [2024-05-15 11:10:55.529432] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.529484] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.529511] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2250:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:17:37.131 [2024-05-15 11:10:55.529541] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 
11:10:55.529591] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.529623] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2289:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:17:37.131 [2024-05-15 11:10:55.529658] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.529707] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.529750] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.529786] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.529871] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.529899] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.529936] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.529961] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.529991] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.530021] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.530076] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.530105] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 [2024-05-15 11:10:55.530136] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:17:37.131 [2024-05-15 11:10:55.530159] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0eebc100 is same with the state(5) to be set 00:17:37.131 passed 00:17:37.131 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:17:37.131 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed 00:17:37.131 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:17:37.131 00:17:37.131 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.131 suites 1 1 n/a 0 0 00:17:37.131 tests 17 17 17 0 0 00:17:37.131 asserts 222 222 222 0 n/a 00:17:37.131 00:17:37.131 Elapsed time = 0.150 seconds 00:17:37.131 [2024-05-15 11:10:55.550080] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer 
too small! 00:17:37.131 [2024-05-15 11:10:55.550147] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:17:37.131 [2024-05-15 11:10:55.550367] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:17:37.131 [2024-05-15 11:10:55.550404] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:17:37.131 [2024-05-15 11:10:55.550520] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:17:37.131 [2024-05-15 11:10:55.550548] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:17:37.131 11:10:55 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:17:37.131 00:17:37.131 00:17:37.131 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.131 http://cunit.sourceforge.net/ 00:17:37.131 00:17:37.131 00:17:37.131 Suite: nvmf 00:17:37.131 Test: test_nvmf_tgt_create_poll_group ...passed 00:17:37.131 00:17:37.131 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.131 suites 1 1 n/a 0 0 00:17:37.131 tests 1 1 1 0 0 00:17:37.131 asserts 17 17 17 0 n/a 00:17:37.131 00:17:37.131 Elapsed time = 0.010 seconds 00:17:37.131 00:17:37.131 real 0m0.420s 00:17:37.131 user 0m0.180s 00:17:37.131 sys 0m0.240s 00:17:37.131 11:10:55 unittest.unittest_nvmf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.131 11:10:55 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:17:37.131 ************************************ 00:17:37.131 END TEST unittest_nvmf 00:17:37.131 ************************************ 00:17:37.131 11:10:55 unittest -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:37.131 11:10:55 unittest -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:37.131 11:10:55 unittest -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:17:37.131 11:10:55 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:37.131 11:10:55 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.131 11:10:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:37.131 ************************************ 00:17:37.131 START TEST unittest_nvmf_rdma 00:17:37.131 ************************************ 00:17:37.131 11:10:55 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:17:37.131 00:17:37.131 00:17:37.131 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.131 http://cunit.sourceforge.net/ 00:17:37.131 00:17:37.131 00:17:37.131 Suite: nvmf 00:17:37.131 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-05-15 11:10:55.763197] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1860:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:17:37.131 [2024-05-15 11:10:55.763498] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1910:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:17:37.131 [2024-05-15 
11:10:55.763550] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1910:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:17:37.131 passed 00:17:37.131 Test: test_spdk_nvmf_rdma_request_process ...passed 00:17:37.131 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:17:37.131 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:17:37.131 Test: test_nvmf_rdma_opts_init ...passed 00:17:37.131 Test: test_nvmf_rdma_request_free_data ...passed 00:17:37.131 Test: test_nvmf_rdma_resources_create ...passed 00:17:37.131 Test: test_nvmf_rdma_qpair_compare ...passed 00:17:37.131 Test: test_nvmf_rdma_resize_cq ...[2024-05-15 11:10:55.765315] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 949:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:17:37.131 Using CQ of insufficient size may lead to CQ overrun 00:17:37.131 [2024-05-15 11:10:55.765429] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:17:37.131 [2024-05-15 11:10:55.765509] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 962:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:17:37.391 passed 00:17:37.391 00:17:37.391 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.391 suites 1 1 n/a 0 0 00:17:37.391 tests 9 9 9 0 0 00:17:37.391 asserts 579 579 579 0 n/a 00:17:37.391 00:17:37.391 Elapsed time = 0.000 seconds 00:17:37.391 00:17:37.391 real 0m0.032s 00:17:37.391 user 0m0.016s 00:17:37.391 sys 0m0.017s 00:17:37.391 11:10:55 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.391 11:10:55 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 ************************************ 00:17:37.391 END TEST unittest_nvmf_rdma 00:17:37.391 ************************************ 00:17:37.391 11:10:55 unittest -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:37.391 11:10:55 unittest -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:17:37.391 11:10:55 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:37.391 11:10:55 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.391 11:10:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 ************************************ 00:17:37.391 START TEST unittest_scsi 00:17:37.391 ************************************ 00:17:37.391 11:10:55 unittest.unittest_scsi -- common/autotest_common.sh@1121 -- # unittest_scsi 00:17:37.391 11:10:55 unittest.unittest_scsi -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:17:37.391 00:17:37.391 00:17:37.391 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.391 http://cunit.sourceforge.net/ 00:17:37.391 00:17:37.391 00:17:37.391 Suite: dev_suite 00:17:37.391 Test: dev_destruct_null_dev ...passed 00:17:37.391 Test: dev_destruct_zero_luns ...passed 00:17:37.391 Test: dev_destruct_null_lun ...passed 00:17:37.391 Test: dev_destruct_success ...passed 00:17:37.391 Test: dev_construct_num_luns_zero ...passed 00:17:37.391 Test: dev_construct_no_lun_zero ...passed 00:17:37.391 Test: dev_construct_null_lun ...passed 00:17:37.391 Test: dev_construct_name_too_long ...[2024-05-15 11:10:55.846193] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: 
*ERROR*: device Name: no LUNs specified 00:17:37.391 [2024-05-15 11:10:55.846476] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:17:37.391 [2024-05-15 11:10:55.846514] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:17:37.391 [2024-05-15 11:10:55.846556] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:17:37.391 passed 00:17:37.391 Test: dev_construct_success ...passed 00:17:37.391 Test: dev_construct_success_lun_zero_not_first ...passed 00:17:37.391 Test: dev_queue_mgmt_task_success ...passed 00:17:37.391 Test: dev_queue_task_success ...passed 00:17:37.391 Test: dev_stop_success ...passed 00:17:37.391 Test: dev_add_port_max_ports ...passed 00:17:37.391 Test: dev_add_port_construct_failure1 ...[2024-05-15 11:10:55.846763] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:17:37.391 [2024-05-15 11:10:55.846874] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:17:37.391 passed 00:17:37.391 Test: dev_add_port_construct_failure2 ...passed 00:17:37.391 Test: dev_add_port_success1 ...passed 00:17:37.391 Test: dev_add_port_success2 ...passed 00:17:37.391 Test: dev_add_port_success3 ...passed 00:17:37.391 Test: dev_find_port_by_id_num_ports_zero ...passed 00:17:37.391 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:17:37.391 Test: dev_find_port_by_id_success ...passed 00:17:37.391 Test: dev_add_lun_bdev_not_found ...passed 00:17:37.391 Test: dev_add_lun_no_free_lun_id ...[2024-05-15 11:10:55.846980] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:17:37.391 passed 00:17:37.391 Test: dev_add_lun_success1 ...passed 00:17:37.391 Test: dev_add_lun_success2 ...passed 00:17:37.391 Test: dev_check_pending_tasks ...passed 00:17:37.391 Test: dev_iterate_luns ...[2024-05-15 11:10:55.847345] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:17:37.391 passed 00:17:37.391 Test: dev_find_free_lun ...passed 00:17:37.391 00:17:37.391 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.391 suites 1 1 n/a 0 0 00:17:37.391 tests 29 29 29 0 0 00:17:37.391 asserts 97 97 97 0 n/a 00:17:37.391 00:17:37.391 Elapsed time = 0.010 seconds 00:17:37.391 11:10:55 unittest.unittest_scsi -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:17:37.391 00:17:37.391 00:17:37.391 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.391 http://cunit.sourceforge.net/ 00:17:37.391 00:17:37.391 00:17:37.391 Suite: lun_suite 00:17:37.392 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:17:37.392 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:17:37.392 Test: lun_task_mgmt_execute_lun_reset ...passed 00:17:37.392 Test: lun_task_mgmt_execute_target_reset ...passed 00:17:37.392 Test: lun_task_mgmt_execute_invalid_case ...passed 00:17:37.392 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 
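The dev_suite run just above exercises the argument validation in spdk_scsi_dev_construct_ext(): its error strings show that a device name may be at most 255 characters, that at least one LUN must be specified, that LUN 0 must be present, and that a device holds at most 4 ports. A minimal sketch of that validation logic follows; only the limits and the function name spdk_scsi_dev_construct_ext come from the log, every helper name below is hypothetical.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_DEV_NAME_LEN 255 /* "name longer than maximum allowed length 255" */
#define MAX_PORTS        4   /* "device already has 4 ports" */

/* Hypothetical stand-in for the up-front checks the dev_suite errors imply. */
static bool
scsi_dev_args_valid(const char *name, const int *lun_ids, size_t num_luns)
{
	bool have_lun0 = false;
	size_t i;

	if (name == NULL || strlen(name) > MAX_DEV_NAME_LEN) {
		return false; /* name missing or too long */
	}
	if (num_luns == 0) {
		return false; /* "no LUNs specified" */
	}
	for (i = 0; i < num_luns; i++) {
		if (lun_ids[i] == 0) {
			have_lun0 = true;
		}
	}
	return have_lun0; /* otherwise "no LUN 0 specified" */
}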
00:17:37.392 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:17:37.392 Test: lun_append_task_null_lun_not_supported ...passed 00:17:37.392 Test: lun_execute_scsi_task_pending ...[2024-05-15 11:10:55.873479] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:17:37.392 [2024-05-15 11:10:55.873761] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:17:37.392 [2024-05-15 11:10:55.873888] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:17:37.392 passed 00:17:37.392 Test: lun_execute_scsi_task_complete ...passed 00:17:37.392 Test: lun_execute_scsi_task_resize ...passed 00:17:37.392 Test: lun_destruct_success ...passed 00:17:37.392 Test: lun_construct_null_ctx ...passed 00:17:37.392 Test: lun_construct_success ...passed 00:17:37.392 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:17:37.392 Test: lun_reset_task_suspend_scsi_task ...passed 00:17:37.392 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:17:37.392 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:17:37.392 00:17:37.392 [2024-05-15 11:10:55.874026] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:17:37.392 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.392 suites 1 1 n/a 0 0 00:17:37.392 tests 18 18 18 0 0 00:17:37.392 asserts 153 153 153 0 n/a 00:17:37.392 00:17:37.392 Elapsed time = 0.000 seconds 00:17:37.392 11:10:55 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:17:37.392 00:17:37.392 00:17:37.392 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.392 http://cunit.sourceforge.net/ 00:17:37.392 00:17:37.392 00:17:37.392 Suite: scsi_suite 00:17:37.392 Test: scsi_init ...passed 00:17:37.392 00:17:37.392 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.392 suites 1 1 n/a 0 0 00:17:37.392 tests 1 1 1 0 0 00:17:37.392 asserts 1 1 1 0 n/a 00:17:37.392 00:17:37.392 Elapsed time = 0.000 seconds 00:17:37.392 11:10:55 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:17:37.392 00:17:37.392 00:17:37.392 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.392 http://cunit.sourceforge.net/ 00:17:37.392 00:17:37.392 00:17:37.392 Suite: translation_suite 00:17:37.392 Test: mode_select_6_test ...passed 00:17:37.392 Test: mode_select_6_test2 ...passed 00:17:37.392 Test: mode_sense_6_test ...passed 00:17:37.392 Test: mode_sense_10_test ...passed 00:17:37.392 Test: inquiry_evpd_test ...passed 00:17:37.392 Test: inquiry_standard_test ...passed 00:17:37.392 Test: inquiry_overflow_test ...passed 00:17:37.392 Test: task_complete_test ...passed 00:17:37.392 Test: lba_range_test ...passed 00:17:37.392 Test: xfer_len_test ...passed 00:17:37.392 Test: xfer_test ...passed 00:17:37.392 Test: scsi_name_padding_test ...passed 00:17:37.392 Test: get_dif_ctx_test ...passed 00:17:37.392 Test: unmap_split_test ...[2024-05-15 11:10:55.919856] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:17:37.392 passed 00:17:37.392 00:17:37.392 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.392 suites 1 1 n/a 0 0 00:17:37.392 tests 14 14 14 0 0 00:17:37.392 asserts 1205 1205 
1205 0 n/a 00:17:37.392 00:17:37.392 Elapsed time = 0.000 seconds 00:17:37.392 11:10:55 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:17:37.392 00:17:37.392 00:17:37.392 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.392 http://cunit.sourceforge.net/ 00:17:37.392 00:17:37.392 00:17:37.392 Suite: reservation_suite 00:17:37.392 Test: test_reservation_register ...passed 00:17:37.392 Test: test_reservation_reserve ...passed 00:17:37.392 Test: test_reservation_preempt_non_all_regs ...[2024-05-15 11:10:55.942399] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:17:37.392 [2024-05-15 11:10:55.942635] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:17:37.392 [2024-05-15 11:10:55.942696] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:17:37.392 [2024-05-15 11:10:55.942795] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:17:37.392 [2024-05-15 11:10:55.942877] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:17:37.392 passed 00:17:37.392 Test: test_reservation_preempt_all_regs ...passed 00:17:37.392 Test: test_reservation_cmds_conflict ...[2024-05-15 11:10:55.942926] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:17:37.392 [2024-05-15 11:10:55.943017] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:17:37.392 [2024-05-15 11:10:55.943092] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:17:37.392 passed 00:17:37.392 Test: test_scsi2_reserve_release ...passed 00:17:37.392 Test: test_pr_with_scsi2_reserve_release ...passed 00:17:37.392 00:17:37.392 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.392 suites 1 1 n/a 0 0 00:17:37.392 tests 7 7 7 0 0 00:17:37.392 asserts 257 257 257 0 n/a 00:17:37.392 00:17:37.392 Elapsed time = 0.000 seconds 00:17:37.392 [2024-05-15 11:10:55.943143] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:17:37.392 [2024-05-15 11:10:55.943187] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:17:37.392 [2024-05-15 11:10:55.943211] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:17:37.392 [2024-05-15 11:10:55.943238] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:17:37.392 [2024-05-15 11:10:55.943259] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:17:37.392 [2024-05-15 11:10:55.943331] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:17:37.392 ************************************ 
00:17:37.392 END TEST unittest_scsi 00:17:37.392 ************************************ 00:17:37.392 00:17:37.392 real 0m0.125s 00:17:37.392 user 0m0.063s 00:17:37.392 sys 0m0.064s 00:17:37.392 11:10:55 unittest.unittest_scsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.392 11:10:55 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:17:37.392 11:10:55 unittest -- unit/unittest.sh@276 -- # uname -s 00:17:37.392 11:10:55 unittest -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:17:37.392 11:10:55 unittest -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:17:37.392 11:10:55 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:37.392 11:10:55 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.392 11:10:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:37.392 ************************************ 00:17:37.392 START TEST unittest_sock 00:17:37.392 ************************************ 00:17:37.392 11:10:55 unittest.unittest_sock -- common/autotest_common.sh@1121 -- # unittest_sock 00:17:37.392 11:10:55 unittest.unittest_sock -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:17:37.392 00:17:37.392 00:17:37.392 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.392 http://cunit.sourceforge.net/ 00:17:37.392 00:17:37.392 00:17:37.392 Suite: sock 00:17:37.652 Test: posix_sock ...passed 00:17:37.652 Test: ut_sock ...passed 00:17:37.652 Test: posix_sock_group ...passed 00:17:37.652 Test: ut_sock_group ...passed 00:17:37.652 Test: posix_sock_group_fairness ...passed 00:17:37.652 Test: _posix_sock_close ...passed 00:17:37.652 Test: sock_get_default_opts ...passed 00:17:37.652 Test: ut_sock_impl_get_set_opts ...passed 00:17:37.652 Test: posix_sock_impl_get_set_opts ...passed 00:17:37.652 Test: ut_sock_map ...passed 00:17:37.652 Test: override_impl_opts ...passed 00:17:37.652 Test: ut_sock_group_get_ctx ...passed 00:17:37.652 00:17:37.652 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.652 suites 1 1 n/a 0 0 00:17:37.652 tests 12 12 12 0 0 00:17:37.652 asserts 349 349 349 0 n/a 00:17:37.652 00:17:37.652 Elapsed time = 0.010 seconds 00:17:37.652 11:10:56 unittest.unittest_sock -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:17:37.652 00:17:37.652 00:17:37.652 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.652 http://cunit.sourceforge.net/ 00:17:37.652 00:17:37.652 00:17:37.652 Suite: posix 00:17:37.652 Test: flush ...passed 00:17:37.652 00:17:37.652 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.652 suites 1 1 n/a 0 0 00:17:37.652 tests 1 1 1 0 0 00:17:37.652 asserts 28 28 28 0 n/a 00:17:37.652 00:17:37.652 Elapsed time = 0.000 seconds 00:17:37.652 11:10:56 unittest.unittest_sock -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:37.652 ************************************ 00:17:37.652 END TEST unittest_sock 00:17:37.652 ************************************ 00:17:37.652 00:17:37.652 real 0m0.083s 00:17:37.652 user 0m0.030s 00:17:37.652 sys 0m0.030s 00:17:37.652 11:10:56 unittest.unittest_sock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.652 11:10:56 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:17:37.652 11:10:56 unittest -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 
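Every *_ut binary in this run, including the thread_ut just launched above, prints the same "CUnit - A unit testing framework for C - Version 2.1-3" banner followed by a Run Summary table, so they all share the same harness shape. A minimal sketch of such a CUnit Basic-interface harness; the suite and test names here are illustrative, not the ones SPDK registers.

#include <CUnit/Basic.h>

static void
test_example(void)
{
	CU_ASSERT_EQUAL(1 + 1, 2);
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}
	/* NULL setup/teardown callbacks for brevity */
	suite = CU_add_suite("io_channel", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "thread_alloc", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}
	CU_basic_set_mode(CU_BRM_VERBOSE); /* prints the per-test "passed" lines seen above */
	CU_basic_run_tests();              /* prints the Run Summary table */
	failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return failures == 0 ? 0 : 1;
}

run_test() in autotest_common.sh then wraps each binary's exit status into the START TEST / END TEST banners that recur throughout this log.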
00:17:37.652 11:10:56 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:37.652 11:10:56 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.652 11:10:56 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:37.652 ************************************ 00:17:37.652 START TEST unittest_thread 00:17:37.652 ************************************ 00:17:37.652 11:10:56 unittest.unittest_thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:17:37.652 00:17:37.652 00:17:37.652 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.652 http://cunit.sourceforge.net/ 00:17:37.652 00:17:37.652 00:17:37.652 Suite: io_channel 00:17:37.652 Test: thread_alloc ...passed 00:17:37.652 Test: thread_send_msg ...passed 00:17:37.652 Test: thread_poller ...passed 00:17:37.652 Test: poller_pause ...passed 00:17:37.652 Test: thread_for_each ...passed 00:17:37.652 Test: for_each_channel_remove ...passed 00:17:37.652 Test: for_each_channel_unreg ...passed 00:17:37.652 Test: thread_name ...[2024-05-15 11:10:56.158343] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x7ffcdadf8170 already registered (old:0x613000000200 new:0x6130000003c0) 00:17:37.652 passed 00:17:37.652 Test: channel ...passed 00:17:37.652 Test: channel_destroy_races ...[2024-05-15 11:10:56.160904] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2307:spdk_get_io_channel: *ERROR*: could not find io_device 0x492120 00:17:37.652 passed 00:17:37.652 Test: thread_exit_test ...[2024-05-15 11:10:56.164273] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 635:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:17:37.652 passed 00:17:37.652 Test: thread_update_stats_test ...passed 00:17:37.652 Test: nested_channel ...passed 00:17:37.652 Test: device_unregister_and_thread_exit_race ...passed 00:17:37.652 Test: cache_closest_timed_poller ...passed 00:17:37.652 Test: multi_timed_pollers_have_same_expiration ...passed 00:17:37.652 Test: io_device_lookup ...passed 00:17:37.652 Test: spdk_spin ...passed 00:17:37.652 Test: for_each_channel_and_thread_exit_race ...[2024-05-15 11:10:56.170750] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3071:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:17:37.652 [2024-05-15 11:10:56.170819] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcdadf8150 00:17:37.652 [2024-05-15 11:10:56.170904] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3109:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:17:37.652 [2024-05-15 11:10:56.172036] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:17:37.652 [2024-05-15 11:10:56.172085] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcdadf8150 00:17:37.652 [2024-05-15 11:10:56.172116] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:17:37.652 [2024-05-15 11:10:56.172145] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcdadf8150 00:17:37.652 [2024-05-15 11:10:56.172168] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:17:37.652 [2024-05-15 11:10:56.172197] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcdadf8150 00:17:37.652 [2024-05-15 11:10:56.172221] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3053:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:17:37.652 [2024-05-15 11:10:56.172260] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcdadf8150 00:17:37.652 passed 00:17:37.652 Test: for_each_thread_and_thread_exit_race ...passed 00:17:37.652 00:17:37.652 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.652 suites 1 1 n/a 0 0 00:17:37.652 tests 20 20 20 0 0 00:17:37.652 asserts 409 409 409 0 n/a 00:17:37.652 00:17:37.652 Elapsed time = 0.040 seconds 00:17:37.652 ************************************ 00:17:37.652 END TEST unittest_thread 00:17:37.652 ************************************ 00:17:37.652 00:17:37.652 real 0m0.066s 00:17:37.652 user 0m0.044s 00:17:37.652 sys 0m0.022s 00:17:37.652 11:10:56 unittest.unittest_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.652 11:10:56 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:17:37.652 11:10:56 unittest -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:17:37.652 11:10:56 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:37.652 11:10:56 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.652 11:10:56 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:37.652 ************************************ 00:17:37.652 START TEST unittest_iobuf 00:17:37.652 ************************************ 00:17:37.652 11:10:56 unittest.unittest_iobuf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:17:37.652 00:17:37.652 00:17:37.652 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.652 http://cunit.sourceforge.net/ 00:17:37.652 00:17:37.652 00:17:37.652 Suite: io_channel 00:17:37.652 Test: iobuf ...passed 00:17:37.652 Test: iobuf_cache ...passed 00:17:37.652 00:17:37.652 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.652 suites 1 1 n/a 0 0 00:17:37.652 tests 2 2 2 0 0 00:17:37.652 asserts 107 107 107 0 n/a 00:17:37.652 00:17:37.652 Elapsed time = 0.000 seconds 00:17:37.652 [2024-05-15 11:10:56.257715] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:17:37.652 [2024-05-15 11:10:56.257960] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:17:37.652 [2024-05-15 11:10:56.258058] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:17:37.652 [2024-05-15 11:10:56.258093] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
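The iobuf_cache errors printed just above are the expected output of that test: two modules ('ut_module0', 'ut_module1') request per-channel caches of 5 and 4 entries against shared pools holding only 4, so the first populate stops at 4/5 entries and the second gets 0/4. A sketch of the underlying accounting follows; only spdk_iobuf_opts.small_pool_count / large_pool_count and spdk_iobuf_channel_init come from the log, everything else is a hypothetical model.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical model of a shared iobuf pool being carved into per-channel caches. */
struct iobuf_pool {
	uint64_t total;    /* e.g. spdk_iobuf_opts.small_pool_count */
	uint64_t consumed; /* entries already claimed by other channels */
};

static bool
cache_populate(struct iobuf_pool *pool, const char *module, uint64_t want)
{
	uint64_t avail = pool->total - pool->consumed;

	if (avail < want) {
		fprintf(stderr, "Failed to populate '%s' cache at %llu/%llu entries\n",
			module, (unsigned long long)avail, (unsigned long long)want);
		pool->consumed += avail; /* take what is left, as the 4/5 message implies */
		return false;
	}
	pool->consumed += want;
	return true;
}

/* With total = 4: cache_populate(&p, "ut_module0", 5) reports 4/5, then
 * cache_populate(&p, "ut_module1", 4) reports 0/4 - matching the log above. */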
00:17:37.652 [2024-05-15 11:10:56.258133] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:17:37.652 [2024-05-15 11:10:56.258170] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:17:37.652 ************************************ 00:17:37.652 END TEST unittest_iobuf 00:17:37.652 ************************************ 00:17:37.652 00:17:37.652 real 0m0.030s 00:17:37.652 user 0m0.017s 00:17:37.652 sys 0m0.013s 00:17:37.653 11:10:56 unittest.unittest_iobuf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.653 11:10:56 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:17:37.911 11:10:56 unittest -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:17:37.911 11:10:56 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:37.912 11:10:56 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.912 11:10:56 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:37.912 ************************************ 00:17:37.912 START TEST unittest_util 00:17:37.912 ************************************ 00:17:37.912 11:10:56 unittest.unittest_util -- common/autotest_common.sh@1121 -- # unittest_util 00:17:37.912 11:10:56 unittest.unittest_util -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:17:37.912 00:17:37.912 00:17:37.912 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.912 http://cunit.sourceforge.net/ 00:17:37.912 00:17:37.912 00:17:37.912 Suite: base64 00:17:37.912 Test: test_base64_get_encoded_strlen ...passed 00:17:37.912 Test: test_base64_get_decoded_len ...passed 00:17:37.912 Test: test_base64_encode ...passed 00:17:37.912 Test: test_base64_decode ...passed 00:17:37.912 Test: test_base64_urlsafe_encode ...passed 00:17:37.912 Test: test_base64_urlsafe_decode ...passed 00:17:37.912 00:17:37.912 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.912 suites 1 1 n/a 0 0 00:17:37.912 tests 6 6 6 0 0 00:17:37.912 asserts 112 112 112 0 n/a 00:17:37.912 00:17:37.912 Elapsed time = 0.000 seconds 00:17:37.912 11:10:56 unittest.unittest_util -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:17:37.912 00:17:37.912 00:17:37.912 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.912 http://cunit.sourceforge.net/ 00:17:37.912 00:17:37.912 00:17:37.912 Suite: bit_array 00:17:37.912 Test: test_1bit ...passed 00:17:37.912 Test: test_64bit ...passed 00:17:37.912 Test: test_find ...passed 00:17:37.912 Test: test_resize ...passed 00:17:37.912 Test: test_errors ...passed 00:17:37.912 Test: test_count ...passed 00:17:37.912 Test: test_mask_store_load ...passed 00:17:37.912 Test: test_mask_clear ...passed 00:17:37.912 00:17:37.912 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.912 suites 1 1 n/a 0 0 00:17:37.912 tests 8 8 8 0 0 00:17:37.912 asserts 5075 5075 5075 0 n/a 00:17:37.912 00:17:37.912 Elapsed time = 0.000 seconds 00:17:37.912 11:10:56 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:17:37.912 00:17:37.912 00:17:37.912 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.912 http://cunit.sourceforge.net/ 00:17:37.912 
00:17:37.912 00:17:37.912 Suite: cpuset 00:17:37.912 Test: test_cpuset ...passed 00:17:37.912 Test: test_cpuset_parse ...[2024-05-15 11:10:56.385996] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:17:37.912 [2024-05-15 11:10:56.386251] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:17:37.912 [2024-05-15 11:10:56.386341] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:17:37.912 passed 00:17:37.912 Test: test_cpuset_fmt ...[2024-05-15 11:10:56.386421] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:17:37.912 [2024-05-15 11:10:56.386447] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:17:37.912 [2024-05-15 11:10:56.386476] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:17:37.912 [2024-05-15 11:10:56.386500] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:17:37.912 [2024-05-15 11:10:56.386543] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:17:37.912 passed 00:17:37.912 00:17:37.912 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.912 suites 1 1 n/a 0 0 00:17:37.912 tests 3 3 3 0 0 00:17:37.912 asserts 65 65 65 0 n/a 00:17:37.912 00:17:37.912 Elapsed time = 0.010 seconds 00:17:37.912 11:10:56 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:17:37.912 00:17:37.912 00:17:37.912 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.912 http://cunit.sourceforge.net/ 00:17:37.912 00:17:37.912 00:17:37.912 Suite: crc16 00:17:37.912 Test: test_crc16_t10dif ...passed 00:17:37.912 Test: test_crc16_t10dif_seed ...passed 00:17:37.912 Test: test_crc16_t10dif_copy ...passed 00:17:37.912 00:17:37.912 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.912 suites 1 1 n/a 0 0 00:17:37.912 tests 3 3 3 0 0 00:17:37.912 asserts 5 5 5 0 n/a 00:17:37.912 00:17:37.912 Elapsed time = 0.000 seconds 00:17:37.912 11:10:56 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:17:37.912 00:17:37.912 00:17:37.912 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.912 http://cunit.sourceforge.net/ 00:17:37.912 00:17:37.912 00:17:37.912 Suite: crc32_ieee 00:17:37.912 Test: test_crc32_ieee ...passed 00:17:37.912 00:17:37.912 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.912 suites 1 1 n/a 0 0 00:17:37.912 tests 1 1 1 0 0 00:17:37.912 asserts 1 1 1 0 n/a 00:17:37.912 00:17:37.912 Elapsed time = 0.000 seconds 00:17:37.912 11:10:56 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:17:37.912 00:17:37.912 00:17:37.912 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.912 http://cunit.sourceforge.net/ 00:17:37.912 00:17:37.912 00:17:37.912 Suite: crc32c 00:17:37.912 Test: test_crc32c ...passed 00:17:37.912 Test: test_crc32c_nvme ...passed 00:17:37.912 00:17:37.912 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.912 suites 1 1 n/a 0 0 
00:17:37.912 tests 2 2 2 0 0 00:17:37.912 asserts 16 16 16 0 n/a 00:17:37.912 00:17:37.912 Elapsed time = 0.000 seconds 00:17:37.912 11:10:56 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:17:37.912 00:17:37.912 00:17:37.912 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.912 http://cunit.sourceforge.net/ 00:17:37.912 00:17:37.912 00:17:37.912 Suite: crc64 00:17:37.912 Test: test_crc64_nvme ...passed 00:17:37.912 00:17:37.912 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.912 suites 1 1 n/a 0 0 00:17:37.912 tests 1 1 1 0 0 00:17:37.912 asserts 4 4 4 0 n/a 00:17:37.912 00:17:37.912 Elapsed time = 0.000 seconds 00:17:37.912 11:10:56 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:17:37.912 00:17:37.912 00:17:37.912 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.912 http://cunit.sourceforge.net/ 00:17:37.912 00:17:37.912 00:17:37.912 Suite: string 00:17:37.912 Test: test_parse_ip_addr ...passed 00:17:37.912 Test: test_str_chomp ...passed 00:17:37.912 Test: test_parse_capacity ...passed 00:17:37.912 Test: test_sprintf_append_realloc ...passed 00:17:37.912 Test: test_strtol ...passed 00:17:37.912 Test: test_strtoll ...passed 00:17:37.912 Test: test_strarray ...passed 00:17:37.912 Test: test_strcpy_replace ...passed 00:17:37.912 00:17:37.912 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.912 suites 1 1 n/a 0 0 00:17:37.912 tests 8 8 8 0 0 00:17:37.912 asserts 161 161 161 0 n/a 00:17:37.912 00:17:37.912 Elapsed time = 0.000 seconds 00:17:37.912 11:10:56 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:17:37.912 00:17:37.912 00:17:37.912 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.912 http://cunit.sourceforge.net/ 00:17:37.912 00:17:37.912 00:17:37.912 Suite: dif 00:17:37.912 Test: dif_generate_and_verify_test ...[2024-05-15 11:10:56.535292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:17:37.912 [2024-05-15 11:10:56.535678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:17:37.912 [2024-05-15 11:10:56.536121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:17:37.912 [2024-05-15 11:10:56.536446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:17:37.912 [2024-05-15 11:10:56.536751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:17:37.912 [2024-05-15 11:10:56.537099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:17:37.912 passed 00:17:37.912 Test: dif_disable_check_test ...[2024-05-15 11:10:56.538369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:17:37.912 [2024-05-15 11:10:56.538832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:17:37.912 [2024-05-15 11:10:56.539390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:17:37.912 passed 00:17:37.912 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-05-15 11:10:56.540859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:17:37.912 [2024-05-15 11:10:56.541276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:17:37.913 [2024-05-15 11:10:56.541933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:17:37.913 [2024-05-15 11:10:56.542410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:17:37.913 [2024-05-15 11:10:56.542960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:17:37.913 [2024-05-15 11:10:56.543323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:17:37.913 [2024-05-15 11:10:56.543849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:17:37.913 [2024-05-15 11:10:56.544240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:17:37.913 [2024-05-15 11:10:56.544737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:17:37.913 [2024-05-15 11:10:56.545138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:17:37.913 [2024-05-15 11:10:56.545679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:17:37.913 passed 00:17:37.913 Test: dif_apptag_mask_test ...[2024-05-15 11:10:56.546395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:17:37.913 [2024-05-15 11:10:56.546783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:17:38.173 passed 00:17:38.173 Test: dif_sec_512_md_0_error_test ...passed 00:17:38.173 Test: dif_sec_4096_md_0_error_test ...[2024-05-15 11:10:56.547220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:17:38.173 [2024-05-15 11:10:56.547317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:17:38.173 [2024-05-15 11:10:56.547483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
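The dif_ut cases above all reduce to generating and re-checking the three fields of the classic 8-byte T10 DIF tuple stored in each block's metadata: a CRC guard, a 16-bit application tag, and a 32-bit reference tag (the "Guard", "App Tag", and "Ref Tag" in the failure messages). An illustrative layout and one verify step follow; SPDK's real definitions live under include/spdk/dif.h, and the field and function names below are chosen for illustration only.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Classic 8-byte T10 DIF tuple appended to (or interleaved with) each data block. */
struct dif_tuple {
	uint16_t guard;   /* CRC16 of the data block in the classic PI format */
	uint16_t app_tag; /* application tag, opaque unless checking is enabled */
	uint32_t ref_tag; /* reference tag, typically derived from the LBA */
};

/* Hypothetical single-field check mirroring the "_dif_verify: Failed to compare
 * App Tag: LBA=..., Expected=..., Actual=..." lines in the run above. */
static bool
dif_verify_app_tag(uint64_t lba, uint16_t expected, uint16_t actual)
{
	if (expected != actual) {
		fprintf(stderr,
			"Failed to compare App Tag: LBA=%" PRIu64
			", Expected=%" PRIx16 ", Actual=%" PRIx16 "\n",
			lba, expected, actual);
		return false;
	}
	return true;
}

Judging by their names, the dif_disable_check_test and dif_apptag_mask_test cases run above toggle or mask which of these comparisons are applied.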
00:17:38.173 passed 00:17:38.173 Test: dif_sec_4100_md_128_error_test ...[2024-05-15 11:10:56.547716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:17:38.173 [2024-05-15 11:10:56.547766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:17:38.173 passed 00:17:38.173 Test: dif_guard_seed_test ...passed 00:17:38.173 Test: dif_guard_value_test ...passed 00:17:38.173 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:17:38.173 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:17:38.173 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:17:38.173 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:17:38.173 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:17:38.173 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:17:38.173 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:17:38.173 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:17:38.173 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:17:38.173 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-15 11:10:56.593209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=bd4c, Actual=fd4c 00:17:38.173 [2024-05-15 11:10:56.595793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=be21, Actual=fe21 00:17:38.173 [2024-05-15 11:10:56.598482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.173 [2024-05-15 11:10:56.600864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.173 [2024-05-15 11:10:56.603126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:17:38.173 [2024-05-15 11:10:56.605334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:17:38.173 [2024-05-15 11:10:56.607551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=6241 00:17:38.173 [2024-05-15 11:10:56.609280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=bef6 00:17:38.173 [2024-05-15 11:10:56.610562] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=5ab753ed, Actual=1ab753ed 00:17:38.173 [2024-05-15 11:10:56.612242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=78574660, Actual=38574660 00:17:38.173 [2024-05-15 11:10:56.613998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.173 [2024-05-15 11:10:56.615601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.173 [2024-05-15 11:10:56.617328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400000000000005a 00:17:38.173 [2024-05-15 11:10:56.618971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400000000000005a 00:17:38.173 [2024-05-15 11:10:56.620701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=8e63ab1 00:17:38.173 [2024-05-15 11:10:56.621870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=4e8d0653 00:17:38.173 [2024-05-15 11:10:56.623902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.173 [2024-05-15 11:10:56.626335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:17:38.173 [2024-05-15 11:10:56.628781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.173 [2024-05-15 11:10:56.630994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.173 [2024-05-15 11:10:56.632850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000005a 00:17:38.173 [2024-05-15 11:10:56.634566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000005a 00:17:38.173 [2024-05-15 11:10:56.636364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.173 [2024-05-15 11:10:56.637804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=23d8ef698373aa4 00:17:38.173 passed 00:17:38.173 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-05-15 11:10:56.638256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:17:38.173 [2024-05-15 11:10:56.638525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:17:38.173 [2024-05-15 11:10:56.638777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.173 
[2024-05-15 11:10:56.638989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.173 [2024-05-15 11:10:56.639467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.173 [2024-05-15 11:10:56.639691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.173 [2024-05-15 11:10:56.640043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6241 00:17:38.174 [2024-05-15 11:10:56.640273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=bef6 00:17:38.174 [2024-05-15 11:10:56.640598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:17:38.174 [2024-05-15 11:10:56.640818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:17:38.174 [2024-05-15 11:10:56.641143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.641345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.641635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.174 [2024-05-15 11:10:56.641848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.174 [2024-05-15 11:10:56.642158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e63ab1 00:17:38.174 [2024-05-15 11:10:56.642355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4e8d0653 00:17:38.174 [2024-05-15 11:10:56.642719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.174 [2024-05-15 11:10:56.643119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:17:38.174 [2024-05-15 11:10:56.643418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.643784] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.644103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.174 [2024-05-15 11:10:56.644471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.174 [2024-05-15 11:10:56.644778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.174 [2024-05-15 11:10:56.645156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=23d8ef698373aa4 00:17:38.174 passed 00:17:38.174 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-05-15 11:10:56.645419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:17:38.174 [2024-05-15 11:10:56.645759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:17:38.174 [2024-05-15 11:10:56.646018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.646375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.646679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.174 [2024-05-15 11:10:56.646948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.174 [2024-05-15 11:10:56.647251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6241 00:17:38.174 [2024-05-15 11:10:56.647498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=bef6 00:17:38.174 [2024-05-15 11:10:56.647732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:17:38.174 [2024-05-15 11:10:56.647991] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:17:38.174 [2024-05-15 11:10:56.648270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.648480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.648768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.174 [2024-05-15 11:10:56.648993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.174 [2024-05-15 11:10:56.649268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e63ab1 00:17:38.174 [2024-05-15 11:10:56.649465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4e8d0653 00:17:38.174 [2024-05-15 11:10:56.649853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.174 [2024-05-15 11:10:56.650142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:17:38.174 [2024-05-15 11:10:56.650498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.650782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.651154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.174 [2024-05-15 11:10:56.651443] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.174 [2024-05-15 11:10:56.651838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.174 [2024-05-15 11:10:56.652124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=23d8ef698373aa4 00:17:38.174 passed 00:17:38.174 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-05-15 11:10:56.652455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:17:38.174 [2024-05-15 11:10:56.652769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:17:38.174 [2024-05-15 11:10:56.653012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.653320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.653573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.174 [2024-05-15 11:10:56.653913] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.174 [2024-05-15 11:10:56.654145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6241 00:17:38.174 [2024-05-15 11:10:56.654459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=bef6 00:17:38.174 [2024-05-15 11:10:56.654658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:17:38.174 [2024-05-15 11:10:56.654956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:17:38.174 [2024-05-15 11:10:56.655182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.655469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.655672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=4000000000000058 00:17:38.174 [2024-05-15 11:10:56.655969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.174 [2024-05-15 11:10:56.656167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e63ab1 00:17:38.174 [2024-05-15 11:10:56.656377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4e8d0653 00:17:38.174 [2024-05-15 11:10:56.656752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.174 [2024-05-15 11:10:56.657117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:17:38.174 [2024-05-15 11:10:56.657390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.657769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.658065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.174 [2024-05-15 11:10:56.658445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.174 [2024-05-15 11:10:56.658754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.174 [2024-05-15 11:10:56.659129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=23d8ef698373aa4 00:17:38.174 passed 00:17:38.174 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-05-15 11:10:56.659450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:17:38.174 [2024-05-15 11:10:56.659697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:17:38.174 [2024-05-15 11:10:56.660021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.660262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.660579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.174 [2024-05-15 11:10:56.660823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.174 [2024-05-15 11:10:56.661137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6241 00:17:38.174 [2024-05-15 11:10:56.661375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=fe21, Actual=bef6 00:17:38.174 passed 00:17:38.174 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-05-15 11:10:56.661726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:17:38.174 [2024-05-15 11:10:56.662032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:17:38.174 [2024-05-15 11:10:56.662260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.174 [2024-05-15 11:10:56.662534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.662760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.175 [2024-05-15 11:10:56.662994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.175 [2024-05-15 11:10:56.663246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e63ab1 00:17:38.175 [2024-05-15 11:10:56.663512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4e8d0653 00:17:38.175 [2024-05-15 11:10:56.663834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.175 [2024-05-15 11:10:56.664187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:17:38.175 [2024-05-15 11:10:56.664474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.664847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.665136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.175 [2024-05-15 11:10:56.665501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.175 [2024-05-15 11:10:56.665803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.175 [2024-05-15 11:10:56.666180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=23d8ef698373aa4 00:17:38.175 passed 00:17:38.175 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-05-15 11:10:56.666535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:17:38.175 [2024-05-15 11:10:56.666793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:17:38.175 
[2024-05-15 11:10:56.667114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.667357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.667696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.175 [2024-05-15 11:10:56.667948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.175 [2024-05-15 11:10:56.668263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6241 00:17:38.175 [2024-05-15 11:10:56.668497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=bef6 00:17:38.175 passed 00:17:38.175 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-05-15 11:10:56.668925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:17:38.175 [2024-05-15 11:10:56.669161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:17:38.175 [2024-05-15 11:10:56.669427] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.669693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.669915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.175 [2024-05-15 11:10:56.670189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.175 [2024-05-15 11:10:56.670420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e63ab1 00:17:38.175 [2024-05-15 11:10:56.670688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4e8d0653 00:17:38.175 [2024-05-15 11:10:56.671034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.175 [2024-05-15 11:10:56.671372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:17:38.175 [2024-05-15 11:10:56.671666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.672040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.672335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.175 [2024-05-15 
11:10:56.672698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.175 [2024-05-15 11:10:56.673016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.175 [2024-05-15 11:10:56.673395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=23d8ef698373aa4 00:17:38.175 passed 00:17:38.175 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:17:38.175 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:17:38.175 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:17:38.175 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:17:38.175 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:17:38.175 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:17:38.175 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:17:38.175 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:17:38.175 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:17:38.175 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-15 11:10:56.698252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=bd4c, Actual=fd4c 00:17:38.175 [2024-05-15 11:10:56.699245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=195c, Actual=595c 00:17:38.175 [2024-05-15 11:10:56.700306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.701292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.702306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:17:38.175 [2024-05-15 11:10:56.703221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:17:38.175 [2024-05-15 11:10:56.704244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=6241 00:17:38.175 [2024-05-15 11:10:56.705162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=99da 00:17:38.175 [2024-05-15 11:10:56.705957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=5ab753ed, Actual=1ab753ed 00:17:38.175 [2024-05-15 11:10:56.706672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=da6c3744, Actual=9a6c3744 00:17:38.175 [2024-05-15 11:10:56.707495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.708231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.709013] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400000000000005a 00:17:38.175 [2024-05-15 11:10:56.709722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400000000000005a 00:17:38.175 [2024-05-15 11:10:56.710503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=8e63ab1 00:17:38.175 [2024-05-15 11:10:56.711293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=7e4a211d 00:17:38.175 [2024-05-15 11:10:56.712636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.175 [2024-05-15 11:10:56.713971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=5a6361a5cf7694c3, Actual=1a6361a5cf7694c3 00:17:38.175 [2024-05-15 11:10:56.715340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.716631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.718060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000005a 00:17:38.175 [2024-05-15 11:10:56.719375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000005a 00:17:38.175 [2024-05-15 11:10:56.720794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.175 passed 00:17:38.175 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-15 11:10:56.722113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=52c73a20b9e6a3fa 00:17:38.175 [2024-05-15 11:10:56.722454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:17:38.175 [2024-05-15 11:10:56.722803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:17:38.175 [2024-05-15 11:10:56.723087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.723419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.175 [2024-05-15 11:10:56.723774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.175 [2024-05-15 11:10:56.724060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.175 [2024-05-15 11:10:56.724388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6241 00:17:38.175 [2024-05-15 
11:10:56.724660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=6dc1 00:17:38.175 [2024-05-15 11:10:56.724969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:17:38.175 [2024-05-15 11:10:56.725197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385a16c6, Actual=785a16c6 00:17:38.175 [2024-05-15 11:10:56.725528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.725765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.726047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.176 [2024-05-15 11:10:56.726312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.176 [2024-05-15 11:10:56.726562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e63ab1 00:17:38.176 [2024-05-15 11:10:56.726818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=9c7c009f 00:17:38.176 [2024-05-15 11:10:56.727271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.176 [2024-05-15 11:10:56.727628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bafef545f04a9f59, Actual=fafef545f04a9f59 00:17:38.176 [2024-05-15 11:10:56.728062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.728430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.728907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.176 [2024-05-15 11:10:56.729268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.176 [2024-05-15 11:10:56.729728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.176 [2024-05-15 11:10:56.730116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b25aaec086daa860 00:17:38.176 passed 00:17:38.176 Test: dix_sec_512_md_0_error ...passed 00:17:38.176 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-05-15 11:10:56.730179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:17:38.176 passed 00:17:38.176 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:17:38.176 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:17:38.176 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:17:38.176 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:17:38.176 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:17:38.176 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:17:38.176 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:17:38.176 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:17:38.176 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-15 11:10:56.754832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=bd4c, Actual=fd4c 00:17:38.176 [2024-05-15 11:10:56.755868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=195c, Actual=595c 00:17:38.176 [2024-05-15 11:10:56.756770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.757798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.758748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:17:38.176 [2024-05-15 11:10:56.759777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:17:38.176 [2024-05-15 11:10:56.760674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=6241 00:17:38.176 [2024-05-15 11:10:56.761653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=99da 00:17:38.176 [2024-05-15 11:10:56.762370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=5ab753ed, Actual=1ab753ed 00:17:38.176 [2024-05-15 11:10:56.763155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=da6c3744, Actual=9a6c3744 00:17:38.176 [2024-05-15 11:10:56.763890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.764664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.765365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400000000000005a 00:17:38.176 [2024-05-15 11:10:56.766137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400000000000005a 00:17:38.176 [2024-05-15 11:10:56.766858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=8e63ab1 00:17:38.176 [2024-05-15 11:10:56.767629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=7e4a211d 
00:17:38.176 [2024-05-15 11:10:56.768907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.176 [2024-05-15 11:10:56.770222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=5a6361a5cf7694c3, Actual=1a6361a5cf7694c3 00:17:38.176 [2024-05-15 11:10:56.771499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.772978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.774226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000005a 00:17:38.176 [2024-05-15 11:10:56.775554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000005a 00:17:38.176 [2024-05-15 11:10:56.776884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.176 passed 00:17:38.176 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-15 11:10:56.778196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=52c73a20b9e6a3fa 00:17:38.176 [2024-05-15 11:10:56.778448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:17:38.176 [2024-05-15 11:10:56.778900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:17:38.176 [2024-05-15 11:10:56.779178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.779508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.779819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.176 [2024-05-15 11:10:56.780150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:17:38.176 [2024-05-15 11:10:56.780426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6241 00:17:38.176 [2024-05-15 11:10:56.780770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=6dc1 00:17:38.176 [2024-05-15 11:10:56.781021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:17:38.176 [2024-05-15 11:10:56.781317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385a16c6, Actual=785a16c6 00:17:38.176 [2024-05-15 11:10:56.781563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 
[2024-05-15 11:10:56.781865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.782075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.176 [2024-05-15 11:10:56.782393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:17:38.176 [2024-05-15 11:10:56.782610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e63ab1 00:17:38.176 [2024-05-15 11:10:56.782945] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=9c7c009f 00:17:38.176 [2024-05-15 11:10:56.783308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:17:38.176 [2024-05-15 11:10:56.783757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bafef545f04a9f59, Actual=fafef545f04a9f59 00:17:38.176 [2024-05-15 11:10:56.784120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.784568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:17:38.176 [2024-05-15 11:10:56.784924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.176 [2024-05-15 11:10:56.785358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:17:38.176 [2024-05-15 11:10:56.785715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=73256ed43e59a1b1 00:17:38.176 passed 00:17:38.176 Test: set_md_interleave_iovs_test ...[2024-05-15 11:10:56.786189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b25aaec086daa860 00:17:38.176 passed 00:17:38.176 Test: set_md_interleave_iovs_split_test ...passed 00:17:38.176 Test: dif_generate_stream_pi_16_test ...passed 00:17:38.176 Test: dif_generate_stream_test ...passed 00:17:38.176 Test: set_md_interleave_iovs_alignment_test ...passed 00:17:38.176 Test: dif_generate_split_test ...[2024-05-15 11:10:56.791347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:17:38.176 passed 00:17:38.176 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:17:38.176 Test: dif_verify_split_test ...passed 00:17:38.176 Test: dif_verify_stream_multi_segments_test ...passed 00:17:38.176 Test: update_crc32c_pi_16_test ...passed 00:17:38.176 Test: update_crc32c_test ...passed 00:17:38.176 Test: dif_update_crc32c_split_test ...passed 00:17:38.176 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:17:38.176 Test: get_range_with_md_test ...passed 00:17:38.176 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:17:38.435 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:17:38.435 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:17:38.435 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:17:38.435 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:17:38.435 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:17:38.435 Test: dif_generate_and_verify_unmap_test ...passed 00:17:38.435 00:17:38.435 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.435 suites 1 1 n/a 0 0 00:17:38.435 tests 79 79 79 0 0 00:17:38.435 asserts 3584 3584 3584 0 n/a 00:17:38.435 00:17:38.435 Elapsed time = 0.290 seconds 00:17:38.435 11:10:56 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:17:38.435 00:17:38.435 00:17:38.435 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.435 http://cunit.sourceforge.net/ 00:17:38.435 00:17:38.435 00:17:38.435 Suite: iov 00:17:38.435 Test: test_single_iov ...passed 00:17:38.435 Test: test_simple_iov ...passed 00:17:38.435 Test: test_complex_iov ...passed 00:17:38.435 Test: test_iovs_to_buf ...passed 00:17:38.435 Test: test_buf_to_iovs ...passed 00:17:38.435 Test: test_memset ...passed 00:17:38.435 Test: test_iov_one ...passed 00:17:38.435 Test: test_iov_xfer ...passed 00:17:38.435 00:17:38.435 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.435 suites 1 1 n/a 0 0 00:17:38.435 tests 8 8 8 0 0 00:17:38.435 asserts 156 156 156 0 n/a 00:17:38.435 00:17:38.435 Elapsed time = 0.000 seconds 00:17:38.435 11:10:56 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:17:38.435 00:17:38.435 00:17:38.435 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.435 http://cunit.sourceforge.net/ 00:17:38.435 00:17:38.435 00:17:38.435 Suite: math 00:17:38.435 Test: test_serial_number_arithmetic ...passed 00:17:38.435 Suite: erase 00:17:38.435 Test: test_memset_s ...passed 00:17:38.435 00:17:38.435 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.435 suites 2 2 n/a 0 0 00:17:38.435 tests 2 2 2 0 0 00:17:38.435 asserts 18 18 18 0 n/a 00:17:38.435 00:17:38.435 Elapsed time = 0.000 seconds 00:17:38.435 11:10:56 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:17:38.435 00:17:38.435 00:17:38.435 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.435 http://cunit.sourceforge.net/ 00:17:38.435 00:17:38.435 00:17:38.435 Suite: pipe 00:17:38.435 Test: test_create_destroy ...passed 00:17:38.435 Test: test_write_get_buffer ...passed 00:17:38.435 Test: test_write_advance ...passed 00:17:38.435 Test: test_read_get_buffer ...passed 00:17:38.435 Test: test_read_advance ...passed 00:17:38.435 Test: test_data ...passed 00:17:38.435 00:17:38.435 Run 
Summary: Type Total Ran Passed Failed Inactive 00:17:38.435 suites 1 1 n/a 0 0 00:17:38.435 tests 6 6 6 0 0 00:17:38.435 asserts 251 251 251 0 n/a 00:17:38.435 00:17:38.435 Elapsed time = 0.000 seconds 00:17:38.435 11:10:56 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:17:38.435 00:17:38.435 00:17:38.435 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.435 http://cunit.sourceforge.net/ 00:17:38.435 00:17:38.435 00:17:38.435 Suite: xor 00:17:38.435 Test: test_xor_gen ...passed 00:17:38.435 00:17:38.435 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.435 suites 1 1 n/a 0 0 00:17:38.435 tests 1 1 1 0 0 00:17:38.435 asserts 17 17 17 0 n/a 00:17:38.435 00:17:38.435 Elapsed time = 0.000 seconds 00:17:38.435 00:17:38.435 real 0m0.606s 00:17:38.435 user 0m0.393s 00:17:38.435 sys 0m0.218s 00:17:38.435 11:10:56 unittest.unittest_util -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:38.435 ************************************ 00:17:38.435 END TEST unittest_util 00:17:38.435 ************************************ 00:17:38.435 11:10:56 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:17:38.435 11:10:56 unittest -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:17:38.435 11:10:56 unittest -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:17:38.435 11:10:56 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:38.435 11:10:56 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:38.435 11:10:56 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:38.435 ************************************ 00:17:38.435 START TEST unittest_vhost 00:17:38.435 ************************************ 00:17:38.435 11:10:56 unittest.unittest_vhost -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:17:38.435 00:17:38.435 00:17:38.435 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.435 http://cunit.sourceforge.net/ 00:17:38.435 00:17:38.435 00:17:38.435 Suite: vhost_suite 00:17:38.435 Test: desc_to_iov_test ...[2024-05-15 11:10:56.994516] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:17:38.435 passed 00:17:38.435 Test: create_controller_test ...[2024-05-15 11:10:56.997422] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:17:38.435 [2024-05-15 11:10:56.997511] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:17:38.435 [2024-05-15 11:10:56.997594] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:17:38.435 [2024-05-15 11:10:56.997653] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:17:38.435 [2024-05-15 11:10:56.997683] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:17:38.436 [2024-05-15 11:10:56.998105] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1780:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:17:38.436 passed 00:17:38.436 Test: session_find_by_vid_test ...[2024-05-15 11:10:56.998873] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:17:38.436 passed 00:17:38.436 Test: remove_controller_test ...passed 00:17:38.436 Test: vq_avail_ring_get_test ...passed 00:17:38.436 Test: vq_packed_ring_test ...passed 00:17:38.436 Test: vhost_blk_construct_test ...[2024-05-15 11:10:57.000257] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1865:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:17:38.436 passed 00:17:38.436 00:17:38.436 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.436 suites 1 1 n/a 0 0 00:17:38.436 tests 7 7 7 0 0 00:17:38.436 asserts 147 147 147 0 n/a 00:17:38.436 00:17:38.436 Elapsed time = 0.000 seconds 00:17:38.436 ************************************ 00:17:38.436 END TEST unittest_vhost 00:17:38.436 ************************************ 00:17:38.436 00:17:38.436 real 0m0.039s 00:17:38.436 user 0m0.026s 00:17:38.436 sys 0m0.013s 00:17:38.436 11:10:57 unittest.unittest_vhost -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:38.436 11:10:57 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:17:38.436 11:10:57 unittest -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:17:38.436 11:10:57 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:38.436 11:10:57 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:38.436 11:10:57 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:38.436 ************************************ 00:17:38.436 START TEST unittest_dma 00:17:38.436 ************************************ 00:17:38.436 11:10:57 unittest.unittest_dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:17:38.695 00:17:38.695 00:17:38.695 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.695 http://cunit.sourceforge.net/ 00:17:38.695 00:17:38.695 00:17:38.695 Suite: dma_suite 00:17:38.695 Test: test_dma ...passed 00:17:38.695 00:17:38.695 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.695 suites 1 1 n/a 0 0 00:17:38.695 tests 1 1 1 0 0 00:17:38.695 asserts 54 54 54 0 n/a 00:17:38.695 00:17:38.695 Elapsed time = 0.000 seconds 00:17:38.695 [2024-05-15 11:10:57.073230] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:17:38.695 ************************************ 00:17:38.695 END TEST unittest_dma 00:17:38.695 ************************************ 00:17:38.695 00:17:38.695 real 0m0.024s 00:17:38.695 user 0m0.014s 00:17:38.695 sys 0m0.010s 00:17:38.695 11:10:57 unittest.unittest_dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:38.695 11:10:57 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:17:38.695 11:10:57 unittest 
-- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:17:38.695 11:10:57 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:38.695 11:10:57 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:38.695 11:10:57 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:38.695 ************************************ 00:17:38.695 START TEST unittest_init 00:17:38.695 ************************************ 00:17:38.695 11:10:57 unittest.unittest_init -- common/autotest_common.sh@1121 -- # unittest_init 00:17:38.695 11:10:57 unittest.unittest_init -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:17:38.695 00:17:38.695 00:17:38.695 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.695 http://cunit.sourceforge.net/ 00:17:38.695 00:17:38.695 00:17:38.695 Suite: subsystem_suite 00:17:38.695 Test: subsystem_sort_test_depends_on_single ...passed 00:17:38.695 Test: subsystem_sort_test_depends_on_multiple ...passed 00:17:38.695 Test: subsystem_sort_test_missing_dependency ...[2024-05-15 11:10:57.141800] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:17:38.695 [2024-05-15 11:10:57.142221] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:17:38.695 passed 00:17:38.695 00:17:38.695 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.695 suites 1 1 n/a 0 0 00:17:38.695 tests 3 3 3 0 0 00:17:38.695 asserts 20 20 20 0 n/a 00:17:38.695 00:17:38.695 Elapsed time = 0.000 seconds 00:17:38.695 ************************************ 00:17:38.695 END TEST unittest_init 00:17:38.695 00:17:38.695 real 0m0.030s 00:17:38.695 user 0m0.015s 00:17:38.695 sys 0m0.015s 00:17:38.695 11:10:57 unittest.unittest_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:38.695 11:10:57 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:17:38.695 ************************************ 00:17:38.695 11:10:57 unittest -- unit/unittest.sh@288 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:17:38.695 11:10:57 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:38.695 11:10:57 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:38.695 11:10:57 unittest -- common/autotest_common.sh@10 -- # set +x 00:17:38.695 ************************************ 00:17:38.695 START TEST unittest_keyring 00:17:38.695 ************************************ 00:17:38.695 11:10:57 unittest.unittest_keyring -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:17:38.695 00:17:38.695 00:17:38.695 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.695 http://cunit.sourceforge.net/ 00:17:38.695 00:17:38.695 00:17:38.695 Suite: keyring 00:17:38.695 Test: test_keyring_add_remove ...passed 00:17:38.695 Test: test_keyring_get_put ...passed 00:17:38.695 00:17:38.695 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.695 suites 1 1 n/a 0 0 00:17:38.695 tests 2 2 2 0 0 00:17:38.695 asserts 44 44 44 0 n/a 00:17:38.695 00:17:38.695 Elapsed time = 0.000 seconds 00:17:38.695 [2024-05-15 11:10:57.212308] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:17:38.695 [2024-05-15 11:10:57.212534] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 
107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:17:38.695 [2024-05-15 11:10:57.212572] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:38.695 ************************************ 00:17:38.695 END TEST unittest_keyring 00:17:38.695 ************************************ 00:17:38.695 00:17:38.695 real 0m0.024s 00:17:38.695 user 0m0.011s 00:17:38.695 sys 0m0.014s 00:17:38.695 11:10:57 unittest.unittest_keyring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:38.695 11:10:57 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:17:38.695 11:10:57 unittest -- unit/unittest.sh@290 -- # '[' yes = yes ']' 00:17:38.695 11:10:57 unittest -- unit/unittest.sh@290 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:17:38.695 11:10:57 unittest -- unit/unittest.sh@291 -- # hostname 00:17:38.695 11:10:57 unittest -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:17:38.953 geninfo: WARNING: invalid characters removed from testname! 00:18:17.653 11:11:32 unittest -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:18:19.561 11:11:37 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:18:22.844 11:11:40 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:18:25.372 11:11:43 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:18:28.662 11:11:46 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:18:31.195 11:11:49 unittest -- unit/unittest.sh@297 -- # 
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:18:34.479 11:11:52 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:18:37.006 11:11:55 unittest -- unit/unittest.sh@299 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:18:37.007 11:11:55 unittest -- unit/unittest.sh@300 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:18:37.572 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:18:37.572 Found 316 entries. 00:18:37.572 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:18:37.572 Writing .css and .png files. 00:18:37.572 Generating output. 00:18:37.572 Processing file include/linux/virtio_ring.h 00:18:37.830 Processing file include/spdk/util.h 00:18:37.830 Processing file include/spdk/endian.h 00:18:37.830 Processing file include/spdk/thread.h 00:18:37.830 Processing file include/spdk/nvme.h 00:18:37.830 Processing file include/spdk/histogram_data.h 00:18:37.830 Processing file include/spdk/nvme_spec.h 00:18:37.830 Processing file include/spdk/bdev_module.h 00:18:37.830 Processing file include/spdk/trace.h 00:18:37.830 Processing file include/spdk/mmio.h 00:18:37.830 Processing file include/spdk/nvmf_transport.h 00:18:37.830 Processing file include/spdk/base64.h 00:18:37.830 Processing file include/spdk_internal/rdma.h 00:18:37.830 Processing file include/spdk_internal/nvme_tcp.h 00:18:37.830 Processing file include/spdk_internal/sock.h 00:18:37.830 Processing file include/spdk_internal/utf.h 00:18:37.830 Processing file include/spdk_internal/sgl.h 00:18:37.830 Processing file include/spdk_internal/virtio.h 00:18:38.087 Processing file lib/accel/accel_sw.c 00:18:38.087 Processing file lib/accel/accel.c 00:18:38.087 Processing file lib/accel/accel_rpc.c 00:18:38.344 Processing file lib/bdev/bdev.c 00:18:38.344 Processing file lib/bdev/bdev_zone.c 00:18:38.344 Processing file lib/bdev/part.c 00:18:38.344 Processing file lib/bdev/bdev_rpc.c 00:18:38.344 Processing file lib/bdev/scsi_nvme.c 00:18:38.602 Processing file lib/blob/blob_bs_dev.c 00:18:38.602 Processing file lib/blob/blobstore.h 00:18:38.602 Processing file lib/blob/request.c 00:18:38.602 Processing file lib/blob/blobstore.c 00:18:38.602 Processing file lib/blob/zeroes.c 00:18:38.602 Processing file lib/blobfs/blobfs.c 00:18:38.602 Processing file lib/blobfs/tree.c 00:18:38.602 Processing file lib/conf/conf.c 00:18:38.861 Processing file lib/dma/dma.c 00:18:39.119 Processing file lib/env_dpdk/pci_virtio.c 00:18:39.119 Processing file lib/env_dpdk/pci_event.c 00:18:39.119 Processing file lib/env_dpdk/pci_vmd.c 00:18:39.119 Processing file lib/env_dpdk/pci_dpdk.c 00:18:39.119 Processing file 
lib/env_dpdk/pci_dpdk_2207.c 00:18:39.119 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:18:39.119 Processing file lib/env_dpdk/pci_ioat.c 00:18:39.119 Processing file lib/env_dpdk/sigbus_handler.c 00:18:39.119 Processing file lib/env_dpdk/threads.c 00:18:39.119 Processing file lib/env_dpdk/pci_idxd.c 00:18:39.119 Processing file lib/env_dpdk/memory.c 00:18:39.119 Processing file lib/env_dpdk/pci.c 00:18:39.119 Processing file lib/env_dpdk/init.c 00:18:39.119 Processing file lib/env_dpdk/env.c 00:18:39.119 Processing file lib/event/app_rpc.c 00:18:39.119 Processing file lib/event/reactor.c 00:18:39.119 Processing file lib/event/app.c 00:18:39.119 Processing file lib/event/scheduler_static.c 00:18:39.119 Processing file lib/event/log_rpc.c 00:18:39.684 Processing file lib/ftl/ftl_debug.h 00:18:39.684 Processing file lib/ftl/ftl_debug.c 00:18:39.684 Processing file lib/ftl/ftl_core.c 00:18:39.684 Processing file lib/ftl/ftl_io.c 00:18:39.684 Processing file lib/ftl/ftl_core.h 00:18:39.684 Processing file lib/ftl/ftl_io.h 00:18:39.684 Processing file lib/ftl/ftl_band.h 00:18:39.684 Processing file lib/ftl/ftl_writer.c 00:18:39.684 Processing file lib/ftl/ftl_band.c 00:18:39.684 Processing file lib/ftl/ftl_trace.c 00:18:39.684 Processing file lib/ftl/ftl_writer.h 00:18:39.684 Processing file lib/ftl/ftl_sb.c 00:18:39.684 Processing file lib/ftl/ftl_p2l.c 00:18:39.684 Processing file lib/ftl/ftl_rq.c 00:18:39.684 Processing file lib/ftl/ftl_band_ops.c 00:18:39.684 Processing file lib/ftl/ftl_init.c 00:18:39.684 Processing file lib/ftl/ftl_nv_cache_io.h 00:18:39.684 Processing file lib/ftl/ftl_nv_cache.c 00:18:39.684 Processing file lib/ftl/ftl_nv_cache.h 00:18:39.684 Processing file lib/ftl/ftl_l2p_flat.c 00:18:39.684 Processing file lib/ftl/ftl_l2p.c 00:18:39.684 Processing file lib/ftl/ftl_reloc.c 00:18:39.684 Processing file lib/ftl/ftl_l2p_cache.c 00:18:39.684 Processing file lib/ftl/ftl_layout.c 00:18:39.684 Processing file lib/ftl/base/ftl_base_bdev.c 00:18:39.684 Processing file lib/ftl/base/ftl_base_dev.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:18:39.942 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:18:39.942 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:18:39.942 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:18:39.942 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:18:39.942 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:18:39.942 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:18:39.942 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:18:40.200 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:18:40.200 Processing file lib/ftl/utils/ftl_property.h 00:18:40.200 Processing file lib/ftl/utils/ftl_bitmap.c 00:18:40.200 Processing file lib/ftl/utils/ftl_conf.c 00:18:40.200 Processing file lib/ftl/utils/ftl_df.h 00:18:40.200 Processing file lib/ftl/utils/ftl_md.c 00:18:40.200 Processing file 
lib/ftl/utils/ftl_addr_utils.h 00:18:40.200 Processing file lib/ftl/utils/ftl_mempool.c 00:18:40.200 Processing file lib/ftl/utils/ftl_property.c 00:18:40.200 Processing file lib/idxd/idxd.c 00:18:40.200 Processing file lib/idxd/idxd_user.c 00:18:40.200 Processing file lib/idxd/idxd_internal.h 00:18:40.504 Processing file lib/init/subsystem_rpc.c 00:18:40.504 Processing file lib/init/rpc.c 00:18:40.504 Processing file lib/init/json_config.c 00:18:40.504 Processing file lib/init/subsystem.c 00:18:40.504 Processing file lib/ioat/ioat_internal.h 00:18:40.504 Processing file lib/ioat/ioat.c 00:18:40.782 Processing file lib/iscsi/init_grp.c 00:18:40.782 Processing file lib/iscsi/task.h 00:18:40.782 Processing file lib/iscsi/iscsi_subsystem.c 00:18:40.782 Processing file lib/iscsi/conn.c 00:18:40.782 Processing file lib/iscsi/tgt_node.c 00:18:40.782 Processing file lib/iscsi/iscsi_rpc.c 00:18:40.782 Processing file lib/iscsi/portal_grp.c 00:18:40.782 Processing file lib/iscsi/iscsi.h 00:18:40.782 Processing file lib/iscsi/param.c 00:18:40.782 Processing file lib/iscsi/iscsi.c 00:18:40.782 Processing file lib/iscsi/md5.c 00:18:40.782 Processing file lib/iscsi/task.c 00:18:41.040 Processing file lib/json/json_parse.c 00:18:41.040 Processing file lib/json/json_util.c 00:18:41.040 Processing file lib/json/json_write.c 00:18:41.040 Processing file lib/jsonrpc/jsonrpc_server.c 00:18:41.040 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:18:41.040 Processing file lib/jsonrpc/jsonrpc_client.c 00:18:41.040 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:18:41.040 Processing file lib/keyring/keyring_rpc.c 00:18:41.040 Processing file lib/keyring/keyring.c 00:18:41.040 Processing file lib/log/log_flags.c 00:18:41.040 Processing file lib/log/log_deprecated.c 00:18:41.040 Processing file lib/log/log.c 00:18:41.298 Processing file lib/lvol/lvol.c 00:18:41.298 Processing file lib/nbd/nbd.c 00:18:41.298 Processing file lib/nbd/nbd_rpc.c 00:18:41.298 Processing file lib/notify/notify_rpc.c 00:18:41.298 Processing file lib/notify/notify.c 00:18:42.231 Processing file lib/nvme/nvme_cuse.c 00:18:42.232 Processing file lib/nvme/nvme_ctrlr.c 00:18:42.232 Processing file lib/nvme/nvme_poll_group.c 00:18:42.232 Processing file lib/nvme/nvme_stubs.c 00:18:42.232 Processing file lib/nvme/nvme_ns_cmd.c 00:18:42.232 Processing file lib/nvme/nvme_tcp.c 00:18:42.232 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:18:42.232 Processing file lib/nvme/nvme_discovery.c 00:18:42.232 Processing file lib/nvme/nvme_fabric.c 00:18:42.232 Processing file lib/nvme/nvme_opal.c 00:18:42.232 Processing file lib/nvme/nvme_transport.c 00:18:42.232 Processing file lib/nvme/nvme_ns.c 00:18:42.232 Processing file lib/nvme/nvme_pcie_common.c 00:18:42.232 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:18:42.232 Processing file lib/nvme/nvme_io_msg.c 00:18:42.232 Processing file lib/nvme/nvme_pcie_internal.h 00:18:42.232 Processing file lib/nvme/nvme_auth.c 00:18:42.232 Processing file lib/nvme/nvme.c 00:18:42.232 Processing file lib/nvme/nvme_pcie.c 00:18:42.232 Processing file lib/nvme/nvme_internal.h 00:18:42.232 Processing file lib/nvme/nvme_zns.c 00:18:42.232 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:18:42.232 Processing file lib/nvme/nvme_rdma.c 00:18:42.232 Processing file lib/nvme/nvme_qpair.c 00:18:42.232 Processing file lib/nvme/nvme_quirks.c 00:18:42.799 Processing file lib/nvmf/nvmf.c 00:18:42.799 Processing file lib/nvmf/nvmf_internal.h 00:18:42.799 Processing file lib/nvmf/stubs.c 00:18:42.799 Processing file 
lib/nvmf/nvmf_rpc.c 00:18:42.799 Processing file lib/nvmf/ctrlr.c 00:18:42.799 Processing file lib/nvmf/auth.c 00:18:42.799 Processing file lib/nvmf/subsystem.c 00:18:42.799 Processing file lib/nvmf/tcp.c 00:18:42.799 Processing file lib/nvmf/transport.c 00:18:42.799 Processing file lib/nvmf/ctrlr_bdev.c 00:18:42.799 Processing file lib/nvmf/rdma.c 00:18:42.799 Processing file lib/nvmf/ctrlr_discovery.c 00:18:42.799 Processing file lib/rdma/common.c 00:18:42.799 Processing file lib/rdma/rdma_verbs.c 00:18:42.799 Processing file lib/rpc/rpc.c 00:18:43.057 Processing file lib/scsi/port.c 00:18:43.057 Processing file lib/scsi/scsi_bdev.c 00:18:43.057 Processing file lib/scsi/lun.c 00:18:43.057 Processing file lib/scsi/scsi_pr.c 00:18:43.057 Processing file lib/scsi/task.c 00:18:43.057 Processing file lib/scsi/dev.c 00:18:43.057 Processing file lib/scsi/scsi.c 00:18:43.057 Processing file lib/scsi/scsi_rpc.c 00:18:43.057 Processing file lib/sock/sock_rpc.c 00:18:43.057 Processing file lib/sock/sock.c 00:18:43.316 Processing file lib/thread/thread.c 00:18:43.316 Processing file lib/thread/iobuf.c 00:18:43.316 Processing file lib/trace/trace_rpc.c 00:18:43.316 Processing file lib/trace/trace_flags.c 00:18:43.316 Processing file lib/trace/trace.c 00:18:43.316 Processing file lib/trace_parser/trace.cpp 00:18:43.574 Processing file lib/ut/ut.c 00:18:43.574 Processing file lib/ut_mock/mock.c 00:18:43.885 Processing file lib/util/string.c 00:18:43.885 Processing file lib/util/strerror_tls.c 00:18:43.885 Processing file lib/util/hexlify.c 00:18:43.885 Processing file lib/util/uuid.c 00:18:43.885 Processing file lib/util/fd_group.c 00:18:43.885 Processing file lib/util/crc16.c 00:18:43.885 Processing file lib/util/xor.c 00:18:43.885 Processing file lib/util/math.c 00:18:43.885 Processing file lib/util/dif.c 00:18:43.885 Processing file lib/util/bit_array.c 00:18:43.885 Processing file lib/util/fd.c 00:18:43.885 Processing file lib/util/iov.c 00:18:43.885 Processing file lib/util/crc64.c 00:18:43.885 Processing file lib/util/cpuset.c 00:18:43.885 Processing file lib/util/zipf.c 00:18:43.885 Processing file lib/util/crc32.c 00:18:43.885 Processing file lib/util/crc32c.c 00:18:43.885 Processing file lib/util/crc32_ieee.c 00:18:43.885 Processing file lib/util/file.c 00:18:43.885 Processing file lib/util/pipe.c 00:18:43.885 Processing file lib/util/base64.c 00:18:43.885 Processing file lib/vfio_user/host/vfio_user_pci.c 00:18:43.885 Processing file lib/vfio_user/host/vfio_user.c 00:18:44.143 Processing file lib/vhost/rte_vhost_user.c 00:18:44.143 Processing file lib/vhost/vhost_rpc.c 00:18:44.143 Processing file lib/vhost/vhost_blk.c 00:18:44.143 Processing file lib/vhost/vhost_scsi.c 00:18:44.143 Processing file lib/vhost/vhost.c 00:18:44.143 Processing file lib/vhost/vhost_internal.h 00:18:44.402 Processing file lib/virtio/virtio_vfio_user.c 00:18:44.402 Processing file lib/virtio/virtio.c 00:18:44.402 Processing file lib/virtio/virtio_pci.c 00:18:44.402 Processing file lib/virtio/virtio_vhost_user.c 00:18:44.402 Processing file lib/vmd/vmd.c 00:18:44.402 Processing file lib/vmd/led.c 00:18:44.402 Processing file module/accel/dsa/accel_dsa.c 00:18:44.402 Processing file module/accel/dsa/accel_dsa_rpc.c 00:18:44.667 Processing file module/accel/error/accel_error_rpc.c 00:18:44.667 Processing file module/accel/error/accel_error.c 00:18:44.667 Processing file module/accel/iaa/accel_iaa.c 00:18:44.668 Processing file module/accel/iaa/accel_iaa_rpc.c 00:18:44.668 Processing file module/accel/ioat/accel_ioat.c 
00:18:44.668 Processing file module/accel/ioat/accel_ioat_rpc.c 00:18:44.668 Processing file module/bdev/aio/bdev_aio.c 00:18:44.668 Processing file module/bdev/aio/bdev_aio_rpc.c 00:18:44.934 Processing file module/bdev/daos/bdev_daos_rpc.c 00:18:44.934 Processing file module/bdev/daos/bdev_daos.c 00:18:44.934 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:18:44.934 Processing file module/bdev/delay/vbdev_delay.c 00:18:44.934 Processing file module/bdev/error/vbdev_error_rpc.c 00:18:44.934 Processing file module/bdev/error/vbdev_error.c 00:18:45.193 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:18:45.193 Processing file module/bdev/ftl/bdev_ftl.c 00:18:45.193 Processing file module/bdev/gpt/vbdev_gpt.c 00:18:45.193 Processing file module/bdev/gpt/gpt.c 00:18:45.193 Processing file module/bdev/gpt/gpt.h 00:18:45.193 Processing file module/bdev/lvol/vbdev_lvol.c 00:18:45.193 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:18:45.451 Processing file module/bdev/malloc/bdev_malloc.c 00:18:45.451 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:18:45.451 Processing file module/bdev/null/bdev_null_rpc.c 00:18:45.451 Processing file module/bdev/null/bdev_null.c 00:18:45.709 Processing file module/bdev/nvme/bdev_mdns_client.c 00:18:45.709 Processing file module/bdev/nvme/bdev_nvme.c 00:18:45.709 Processing file module/bdev/nvme/vbdev_opal.c 00:18:45.709 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:18:45.709 Processing file module/bdev/nvme/nvme_rpc.c 00:18:45.709 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:18:45.709 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:18:45.967 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:18:45.967 Processing file module/bdev/passthru/vbdev_passthru.c 00:18:46.225 Processing file module/bdev/raid/raid0.c 00:18:46.225 Processing file module/bdev/raid/bdev_raid_rpc.c 00:18:46.225 Processing file module/bdev/raid/bdev_raid.h 00:18:46.225 Processing file module/bdev/raid/concat.c 00:18:46.225 Processing file module/bdev/raid/raid1.c 00:18:46.225 Processing file module/bdev/raid/bdev_raid_sb.c 00:18:46.225 Processing file module/bdev/raid/bdev_raid.c 00:18:46.225 Processing file module/bdev/split/vbdev_split.c 00:18:46.225 Processing file module/bdev/split/vbdev_split_rpc.c 00:18:46.225 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:18:46.225 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:18:46.225 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:18:46.484 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:18:46.484 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:18:46.484 Processing file module/blob/bdev/blob_bdev.c 00:18:46.484 Processing file module/blobfs/bdev/blobfs_bdev.c 00:18:46.484 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:18:46.484 Processing file module/env_dpdk/env_dpdk_rpc.c 00:18:46.742 Processing file module/event/subsystems/accel/accel.c 00:18:46.743 Processing file module/event/subsystems/bdev/bdev.c 00:18:46.743 Processing file module/event/subsystems/iobuf/iobuf.c 00:18:46.743 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:18:46.743 Processing file module/event/subsystems/iscsi/iscsi.c 00:18:46.743 Processing file module/event/subsystems/keyring/keyring.c 00:18:47.001 Processing file module/event/subsystems/nbd/nbd.c 00:18:47.001 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:18:47.001 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:18:47.001 Processing file 
module/event/subsystems/scheduler/scheduler.c 00:18:47.001 Processing file module/event/subsystems/scsi/scsi.c 00:18:47.001 Processing file module/event/subsystems/sock/sock.c 00:18:47.259 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:18:47.259 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:18:47.259 Processing file module/event/subsystems/vmd/vmd.c 00:18:47.259 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:18:47.259 Processing file module/keyring/file/keyring_rpc.c 00:18:47.259 Processing file module/keyring/file/keyring.c 00:18:47.517 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:18:47.517 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:18:47.517 Processing file module/scheduler/gscheduler/gscheduler.c 00:18:47.517 Processing file module/sock/sock_kernel.h 00:18:47.776 Processing file module/sock/posix/posix.c 00:18:47.776 Writing directory view page. 00:18:47.776 Overall coverage rate: 00:18:47.776 lines......: 38.8% (39608 of 101972 lines) 00:18:47.776 functions..: 42.4% (3617 of 8531 functions) 00:18:47.776 00:18:47.776 00:18:47.776 ===================== 00:18:47.776 All unit tests passed 00:18:47.776 ===================== 00:18:47.776 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:18:47.776 11:12:06 unittest -- unit/unittest.sh@303 -- # set +x 00:18:47.776 00:18:47.776 00:18:47.776 00:18:47.776 real 2m43.696s 00:18:47.776 user 2m21.624s 00:18:47.776 sys 0m12.703s 00:18:47.776 ************************************ 00:18:47.776 11:12:06 unittest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:47.776 11:12:06 unittest -- common/autotest_common.sh@10 -- # set +x 00:18:47.776 END TEST unittest 00:18:47.776 ************************************ 00:18:47.776 11:12:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:18:47.776 11:12:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:18:47.776 11:12:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:18:47.776 11:12:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:18:47.776 11:12:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:47.776 11:12:06 -- common/autotest_common.sh@10 -- # set +x 00:18:47.776 11:12:06 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:18:47.776 11:12:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:47.776 11:12:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:47.776 11:12:06 -- common/autotest_common.sh@10 -- # set +x 00:18:47.776 ************************************ 00:18:47.776 START TEST env 00:18:47.776 ************************************ 00:18:47.776 11:12:06 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:18:47.776 * Looking for test storage... 
00:18:47.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:18:47.776 11:12:06 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:18:47.776 11:12:06 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:47.776 11:12:06 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:47.776 11:12:06 env -- common/autotest_common.sh@10 -- # set +x 00:18:47.776 ************************************ 00:18:47.776 START TEST env_memory 00:18:47.776 ************************************ 00:18:47.776 11:12:06 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:18:47.776 00:18:47.776 00:18:47.776 CUnit - A unit testing framework for C - Version 2.1-3 00:18:47.776 http://cunit.sourceforge.net/ 00:18:47.776 00:18:47.776 00:18:47.776 Suite: memory 00:18:48.035 Test: alloc and free memory map ...[2024-05-15 11:12:06.425486] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:18:48.035 passed 00:18:48.035 Test: mem map translation ...[2024-05-15 11:12:06.460749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:18:48.035 [2024-05-15 11:12:06.461056] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:18:48.035 [2024-05-15 11:12:06.461248] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:18:48.035 [2024-05-15 11:12:06.461445] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:18:48.035 passed 00:18:48.035 Test: mem map registration ...[2024-05-15 11:12:06.530173] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:18:48.035 [2024-05-15 11:12:06.530306] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:18:48.035 passed 00:18:48.035 Test: mem map adjacent registrations ...passed 00:18:48.035 00:18:48.035 Run Summary: Type Total Ran Passed Failed Inactive 00:18:48.035 suites 1 1 n/a 0 0 00:18:48.035 tests 4 4 4 0 0 00:18:48.035 asserts 152 152 152 0 n/a 00:18:48.035 00:18:48.035 Elapsed time = 0.180 seconds 00:18:48.035 00:18:48.035 real 0m0.211s 00:18:48.035 user 0m0.191s 00:18:48.035 sys 0m0.020s 00:18:48.035 11:12:06 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:48.035 ************************************ 00:18:48.035 END TEST env_memory 00:18:48.035 ************************************ 00:18:48.035 11:12:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:18:48.035 11:12:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:18:48.035 11:12:06 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:48.035 11:12:06 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:48.035 11:12:06 env -- common/autotest_common.sh@10 -- # set +x 00:18:48.035 ************************************ 00:18:48.035 START TEST env_vtophys 00:18:48.035 ************************************ 00:18:48.035 11:12:06 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:18:48.293 EAL: lib.eal log level changed from notice to debug 00:18:48.293 EAL: Detected lcore 0 as core 0 on socket 0 00:18:48.293 EAL: Detected lcore 1 as core 0 on socket 0 00:18:48.293 EAL: Detected lcore 2 as core 0 on socket 0 00:18:48.293 EAL: Detected lcore 3 as core 0 on socket 0 00:18:48.293 EAL: Detected lcore 4 as core 0 on socket 0 00:18:48.293 EAL: Detected lcore 5 as core 0 on socket 0 00:18:48.293 EAL: Detected lcore 6 as core 0 on socket 0 00:18:48.293 EAL: Detected lcore 7 as core 0 on socket 0 00:18:48.293 EAL: Detected lcore 8 as core 0 on socket 0 00:18:48.293 EAL: Detected lcore 9 as core 0 on socket 0 00:18:48.293 EAL: Maximum logical cores by configuration: 128 00:18:48.293 EAL: Detected CPU lcores: 10 00:18:48.293 EAL: Detected NUMA nodes: 1 00:18:48.293 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:18:48.293 EAL: Checking presence of .so 'librte_eal.so.24' 00:18:48.293 EAL: Checking presence of .so 'librte_eal.so' 00:18:48.293 EAL: Detected static linkage of DPDK 00:18:48.293 EAL: No shared files mode enabled, IPC will be disabled 00:18:48.293 EAL: Selected IOVA mode 'PA' 00:18:48.293 EAL: Probing VFIO support... 00:18:48.293 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:18:48.293 EAL: VFIO modules not loaded, skipping VFIO support... 00:18:48.293 EAL: Ask a virtual area of 0x2e000 bytes 00:18:48.293 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:18:48.293 EAL: Setting up physically contiguous memory... 00:18:48.293 EAL: Setting maximum number of open files to 4096 00:18:48.293 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:18:48.293 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:18:48.293 EAL: Ask a virtual area of 0x61000 bytes 00:18:48.293 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:18:48.293 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:48.293 EAL: Ask a virtual area of 0x400000000 bytes 00:18:48.293 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:18:48.293 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:18:48.293 EAL: Ask a virtual area of 0x61000 bytes 00:18:48.293 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:18:48.293 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:48.293 EAL: Ask a virtual area of 0x400000000 bytes 00:18:48.293 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:18:48.294 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:18:48.294 EAL: Ask a virtual area of 0x61000 bytes 00:18:48.294 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:18:48.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:48.294 EAL: Ask a virtual area of 0x400000000 bytes 00:18:48.294 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:18:48.294 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:18:48.294 EAL: Ask a virtual area of 0x61000 bytes 00:18:48.294 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:18:48.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:48.294 EAL: Ask a virtual area of 0x400000000 bytes 00:18:48.294 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:18:48.294 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:18:48.294 EAL: Hugepages will be freed exactly as allocated. 
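The env_vtophys suite that follows exercises SPDK's virtual-to-physical address translation on top of the EAL memory layout just printed (hugepage-backed memseg lists, IOVA mode PA). A minimal sketch of that flow against the public SPDK env API — the app name and buffer size here are illustrative, not taken from the test source:

#include "spdk/stdinc.h"
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;
    uint64_t size = 2 * 1024 * 1024;
    uint64_t paddr;
    void *buf;

    spdk_env_opts_init(&opts);
    opts.name = "vtophys_sketch";   /* illustrative name, not the real test binary */

    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* DMA-safe allocation from the hugepage-backed heap set up by the EAL above */
    buf = spdk_zmalloc(size, 0x200000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    if (buf == NULL) {
        return 1;
    }

    /* Translate the virtual address; SPDK_VTOPHYS_ERROR means no mapping exists */
    paddr = spdk_vtophys(buf, &size);
    printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

    spdk_free(buf);
    spdk_env_fini();
    return paddr == SPDK_VTOPHYS_ERROR ? 1 : 0;
}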
00:18:48.294 EAL: No shared files mode enabled, IPC is disabled 00:18:48.294 EAL: No shared files mode enabled, IPC is disabled 00:18:48.552 EAL: TSC frequency is ~2200000 KHz 00:18:48.552 EAL: Main lcore 0 is ready (tid=7f8588995180;cpuset=[0]) 00:18:48.552 EAL: Trying to obtain current memory policy. 00:18:48.552 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:48.552 EAL: Restoring previous memory policy: 0 00:18:48.552 EAL: request: mp_malloc_sync 00:18:48.552 EAL: No shared files mode enabled, IPC is disabled 00:18:48.552 EAL: Heap on socket 0 was expanded by 2MB 00:18:48.552 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:18:48.552 EAL: Mem event callback 'spdk:(nil)' registered 00:18:48.552 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:18:48.552 00:18:48.552 00:18:48.552 CUnit - A unit testing framework for C - Version 2.1-3 00:18:48.552 http://cunit.sourceforge.net/ 00:18:48.552 00:18:48.552 00:18:48.552 Suite: components_suite 00:18:48.810 Test: vtophys_malloc_test ...passed 00:18:48.810 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:18:48.810 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:48.810 EAL: Restoring previous memory policy: 0 00:18:48.810 EAL: Calling mem event callback 'spdk:(nil)' 00:18:48.810 EAL: request: mp_malloc_sync 00:18:48.810 EAL: No shared files mode enabled, IPC is disabled 00:18:48.810 EAL: Heap on socket 0 was expanded by 4MB 00:18:48.810 EAL: Calling mem event callback 'spdk:(nil)' 00:18:48.810 EAL: request: mp_malloc_sync 00:18:48.810 EAL: No shared files mode enabled, IPC is disabled 00:18:48.810 EAL: Heap on socket 0 was shrunk by 4MB 00:18:48.810 EAL: Trying to obtain current memory policy. 00:18:48.810 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:48.810 EAL: Restoring previous memory policy: 0 00:18:48.810 EAL: Calling mem event callback 'spdk:(nil)' 00:18:48.810 EAL: request: mp_malloc_sync 00:18:48.810 EAL: No shared files mode enabled, IPC is disabled 00:18:48.810 EAL: Heap on socket 0 was expanded by 6MB 00:18:48.810 EAL: Calling mem event callback 'spdk:(nil)' 00:18:48.810 EAL: request: mp_malloc_sync 00:18:48.810 EAL: No shared files mode enabled, IPC is disabled 00:18:48.810 EAL: Heap on socket 0 was shrunk by 6MB 00:18:48.810 EAL: Trying to obtain current memory policy. 00:18:48.810 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:48.810 EAL: Restoring previous memory policy: 0 00:18:48.810 EAL: Calling mem event callback 'spdk:(nil)' 00:18:48.810 EAL: request: mp_malloc_sync 00:18:48.810 EAL: No shared files mode enabled, IPC is disabled 00:18:48.810 EAL: Heap on socket 0 was expanded by 10MB 00:18:49.068 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.068 EAL: request: mp_malloc_sync 00:18:49.068 EAL: No shared files mode enabled, IPC is disabled 00:18:49.068 EAL: Heap on socket 0 was shrunk by 10MB 00:18:49.068 EAL: Trying to obtain current memory policy. 
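The "Heap on socket 0 was expanded/shrunk by N MB" lines come from DPDK's dynamic memory mode: an allocation that does not fit in the current heap triggers on-demand hugepage allocation and a notification to registered mem event callbacks (the 'spdk:(nil)' callback registered above is SPDK's own). A bare-DPDK sketch of the same mechanism, assuming a default EAL setup and roughly the size progression seen in this suite:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

static void
mem_event_cb(enum rte_mem_event type, const void *addr, size_t len, void *arg)
{
    (void)arg;
    printf("%s: addr %p len %zu\n",
           type == RTE_MEM_EVENT_ALLOC ? "heap expanded" : "heap shrunk", addr, len);
}

int main(int argc, char **argv)
{
    size_t sz;

    if (rte_eal_init(argc, argv) < 0) {
        return 1;
    }

    /* same registration SPDK performs, reported above as 'spdk:(nil)' */
    rte_mem_event_callback_register("sketch", mem_event_cb, NULL);

    /* 4, 6, 10, 18, ... MB — roughly the doubling pattern of vtophys_spdk_malloc_test */
    for (sz = 4; sz <= 1026; sz = sz * 2 - 2) {
        void *p = rte_malloc(NULL, sz * 1024 * 1024, 0);
        rte_free(p);
    }

    rte_eal_cleanup();
    return 0;
}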
00:18:49.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:49.068 EAL: Restoring previous memory policy: 0 00:18:49.068 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.068 EAL: request: mp_malloc_sync 00:18:49.068 EAL: No shared files mode enabled, IPC is disabled 00:18:49.068 EAL: Heap on socket 0 was expanded by 18MB 00:18:49.068 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.068 EAL: request: mp_malloc_sync 00:18:49.068 EAL: No shared files mode enabled, IPC is disabled 00:18:49.068 EAL: Heap on socket 0 was shrunk by 18MB 00:18:49.068 EAL: Trying to obtain current memory policy. 00:18:49.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:49.068 EAL: Restoring previous memory policy: 0 00:18:49.068 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.068 EAL: request: mp_malloc_sync 00:18:49.068 EAL: No shared files mode enabled, IPC is disabled 00:18:49.068 EAL: Heap on socket 0 was expanded by 34MB 00:18:49.068 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.068 EAL: request: mp_malloc_sync 00:18:49.068 EAL: No shared files mode enabled, IPC is disabled 00:18:49.068 EAL: Heap on socket 0 was shrunk by 34MB 00:18:49.068 EAL: Trying to obtain current memory policy. 00:18:49.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:49.068 EAL: Restoring previous memory policy: 0 00:18:49.068 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.068 EAL: request: mp_malloc_sync 00:18:49.068 EAL: No shared files mode enabled, IPC is disabled 00:18:49.068 EAL: Heap on socket 0 was expanded by 66MB 00:18:49.326 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.326 EAL: request: mp_malloc_sync 00:18:49.326 EAL: No shared files mode enabled, IPC is disabled 00:18:49.326 EAL: Heap on socket 0 was shrunk by 66MB 00:18:49.326 EAL: Trying to obtain current memory policy. 00:18:49.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:49.326 EAL: Restoring previous memory policy: 0 00:18:49.326 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.326 EAL: request: mp_malloc_sync 00:18:49.326 EAL: No shared files mode enabled, IPC is disabled 00:18:49.326 EAL: Heap on socket 0 was expanded by 130MB 00:18:49.584 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.584 EAL: request: mp_malloc_sync 00:18:49.584 EAL: No shared files mode enabled, IPC is disabled 00:18:49.584 EAL: Heap on socket 0 was shrunk by 130MB 00:18:49.841 EAL: Trying to obtain current memory policy. 00:18:49.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:49.841 EAL: Restoring previous memory policy: 0 00:18:49.841 EAL: Calling mem event callback 'spdk:(nil)' 00:18:49.841 EAL: request: mp_malloc_sync 00:18:49.841 EAL: No shared files mode enabled, IPC is disabled 00:18:49.841 EAL: Heap on socket 0 was expanded by 258MB 00:18:50.430 EAL: Calling mem event callback 'spdk:(nil)' 00:18:50.430 EAL: request: mp_malloc_sync 00:18:50.430 EAL: No shared files mode enabled, IPC is disabled 00:18:50.430 EAL: Heap on socket 0 was shrunk by 258MB 00:18:50.703 EAL: Trying to obtain current memory policy. 
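The "Setting policy MPOL_PREFERRED for socket 0" / "Restoring previous memory policy: 0" pairs wrapped around every expansion are the EAL steering hugepage allocation to the target NUMA node through the kernel mempolicy API. In plain libnuma terms this is a sketch only, not SPDK code (link with -lnuma):

#include <stdio.h>
#include <numaif.h>

int main(void)
{
    int old_mode = 0;
    unsigned long nodemask = 1UL << 0;   /* prefer NUMA node 0, as in the log */

    /* remember the current policy so it can be restored afterwards */
    if (get_mempolicy(&old_mode, NULL, 0, NULL, 0) != 0) {
        perror("get_mempolicy");
        return 1;
    }

    /* allocate preferentially from node 0 ... */
    if (set_mempolicy(MPOL_PREFERRED, &nodemask, sizeof(nodemask) * 8) != 0) {
        perror("set_mempolicy");
        return 1;
    }

    /* ... hugepage allocation for the heap would happen here ... */

    /* restore the previous policy; mode 0 is MPOL_DEFAULT, matching the log */
    set_mempolicy(old_mode, NULL, 0);
    return 0;
}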
00:18:50.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:50.703 EAL: Restoring previous memory policy: 0 00:18:50.703 EAL: Calling mem event callback 'spdk:(nil)' 00:18:50.703 EAL: request: mp_malloc_sync 00:18:50.703 EAL: No shared files mode enabled, IPC is disabled 00:18:50.703 EAL: Heap on socket 0 was expanded by 514MB 00:18:51.637 EAL: Calling mem event callback 'spdk:(nil)' 00:18:51.637 EAL: request: mp_malloc_sync 00:18:51.637 EAL: No shared files mode enabled, IPC is disabled 00:18:51.637 EAL: Heap on socket 0 was shrunk by 514MB 00:18:52.571 EAL: Trying to obtain current memory policy. 00:18:52.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:52.571 EAL: Restoring previous memory policy: 0 00:18:52.571 EAL: Calling mem event callback 'spdk:(nil)' 00:18:52.571 EAL: request: mp_malloc_sync 00:18:52.571 EAL: No shared files mode enabled, IPC is disabled 00:18:52.571 EAL: Heap on socket 0 was expanded by 1026MB 00:18:54.484 EAL: Calling mem event callback 'spdk:(nil)' 00:18:54.484 EAL: request: mp_malloc_sync 00:18:54.484 EAL: No shared files mode enabled, IPC is disabled 00:18:54.484 EAL: Heap on socket 0 was shrunk by 1026MB 00:18:55.858 passed 00:18:55.858 00:18:55.858 Run Summary: Type Total Ran Passed Failed Inactive 00:18:55.858 suites 1 1 n/a 0 0 00:18:55.858 tests 2 2 2 0 0 00:18:55.858 asserts 6748 6748 6748 0 n/a 00:18:55.858 00:18:55.858 Elapsed time = 7.370 seconds 00:18:55.858 EAL: Calling mem event callback 'spdk:(nil)' 00:18:55.858 EAL: request: mp_malloc_sync 00:18:55.858 EAL: No shared files mode enabled, IPC is disabled 00:18:55.858 EAL: Heap on socket 0 was shrunk by 2MB 00:18:55.858 EAL: No shared files mode enabled, IPC is disabled 00:18:55.858 EAL: No shared files mode enabled, IPC is disabled 00:18:55.858 EAL: No shared files mode enabled, IPC is disabled 00:18:55.858 00:18:55.858 real 0m7.734s 00:18:55.858 user 0m6.513s 00:18:55.858 sys 0m1.012s 00:18:55.858 11:12:14 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:55.858 ************************************ 00:18:55.858 END TEST env_vtophys 00:18:55.858 ************************************ 00:18:55.858 11:12:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:18:55.858 11:12:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:18:55.858 11:12:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:55.858 11:12:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:55.858 11:12:14 env -- common/autotest_common.sh@10 -- # set +x 00:18:55.858 ************************************ 00:18:55.858 START TEST env_pci 00:18:55.858 ************************************ 00:18:55.858 11:12:14 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:18:55.858 00:18:55.858 00:18:55.858 CUnit - A unit testing framework for C - Version 2.1-3 00:18:55.858 http://cunit.sourceforge.net/ 00:18:55.858 00:18:55.858 00:18:55.858 Suite: pci 00:18:55.858 Test: pci_hook ...[2024-05-15 11:12:14.468200] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 46018 has claimed it 00:18:56.116 passed 00:18:56.116 00:18:56.116 EAL: Cannot find device (10000:00:01.0) 00:18:56.116 EAL: Failed to attach device on primary process 00:18:56.116 Run Summary: Type Total Ran Passed Failed Inactive 00:18:56.116 suites 1 1 n/a 0 0 00:18:56.116 tests 1 1 1 0 0 
00:18:56.116 asserts 25 25 25 0 n/a 00:18:56.116 00:18:56.116 Elapsed time = 0.010 seconds 00:18:56.116 00:18:56.116 real 0m0.074s 00:18:56.116 user 0m0.035s 00:18:56.116 sys 0m0.040s 00:18:56.116 11:12:14 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:56.116 ************************************ 00:18:56.116 END TEST env_pci 00:18:56.116 11:12:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:18:56.116 ************************************ 00:18:56.116 11:12:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:18:56.116 11:12:14 env -- env/env.sh@15 -- # uname 00:18:56.116 11:12:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:18:56.116 11:12:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:18:56.116 11:12:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:18:56.116 11:12:14 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:56.116 11:12:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:56.116 11:12:14 env -- common/autotest_common.sh@10 -- # set +x 00:18:56.116 ************************************ 00:18:56.116 START TEST env_dpdk_post_init 00:18:56.116 ************************************ 00:18:56.116 11:12:14 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:18:56.116 EAL: Detected CPU lcores: 10 00:18:56.116 EAL: Detected NUMA nodes: 1 00:18:56.116 EAL: Detected static linkage of DPDK 00:18:56.116 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:18:56.116 EAL: Selected IOVA mode 'PA' 00:18:56.374 TELEMETRY: No legacy callbacks, legacy socket not created 00:18:56.374 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket 0) 00:18:56.374 Starting DPDK initialization... 00:18:56.374 Starting SPDK post initialization... 00:18:56.374 SPDK NVMe probe 00:18:56.374 Attaching to 0000:00:10.0 00:18:56.374 Attached to 0000:00:10.0 00:18:56.374 Cleaning up... 
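env_dpdk_post_init above boils down to initializing the SPDK environment with the -c 0x1 core mask and --base-virtaddr passed by env.sh, then probing the emulated NVMe device at 0000:00:10.0. A condensed sketch of that sequence; the callback and app names are illustrative, not the test's own:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;   /* attach to every controller found */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
    /* a real application would keep ctrlr and detach it on shutdown */
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "post_init_sketch";          /* illustrative */
    opts.core_mask = "0x1";
    opts.base_virtaddr = 0x200000000000;

    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* NULL trid means "scan the local PCIe bus", as the test does */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
        return 1;
    }

    spdk_env_fini();
    return 0;
}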
00:18:56.374 00:18:56.374 real 0m0.338s 00:18:56.374 user 0m0.060s 00:18:56.374 sys 0m0.081s 00:18:56.374 11:12:14 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:56.374 11:12:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:18:56.374 ************************************ 00:18:56.374 END TEST env_dpdk_post_init 00:18:56.374 ************************************ 00:18:56.374 11:12:14 env -- env/env.sh@26 -- # uname 00:18:56.374 11:12:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:18:56.374 11:12:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:18:56.374 11:12:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:56.374 11:12:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:56.374 11:12:14 env -- common/autotest_common.sh@10 -- # set +x 00:18:56.374 ************************************ 00:18:56.374 START TEST env_mem_callbacks 00:18:56.374 ************************************ 00:18:56.374 11:12:14 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:18:56.374 EAL: Detected CPU lcores: 10 00:18:56.374 EAL: Detected NUMA nodes: 1 00:18:56.374 EAL: Detected static linkage of DPDK 00:18:56.632 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:18:56.632 EAL: Selected IOVA mode 'PA' 00:18:56.632 TELEMETRY: No legacy callbacks, legacy socket not created 00:18:56.632 00:18:56.632 00:18:56.632 CUnit - A unit testing framework for C - Version 2.1-3 00:18:56.632 http://cunit.sourceforge.net/ 00:18:56.632 00:18:56.632 00:18:56.632 Suite: memory 00:18:56.632 Test: test ... 00:18:56.632 register 0x200000200000 2097152 00:18:56.632 malloc 3145728 00:18:56.632 register 0x200000400000 4194304 00:18:56.632 buf 0x2000004fffc0 len 3145728 PASSED 00:18:56.632 malloc 64 00:18:56.632 buf 0x2000004ffec0 len 64 PASSED 00:18:56.632 malloc 4194304 00:18:56.632 register 0x200000800000 6291456 00:18:56.632 buf 0x2000009fffc0 len 4194304 PASSED 00:18:56.632 free 0x2000004fffc0 3145728 00:18:56.632 free 0x2000004ffec0 64 00:18:56.632 unregister 0x200000400000 4194304 PASSED 00:18:56.632 free 0x2000009fffc0 4194304 00:18:56.632 unregister 0x200000800000 6291456 PASSED 00:18:56.632 malloc 8388608 00:18:56.632 register 0x200000400000 10485760 00:18:56.632 buf 0x2000005fffc0 len 8388608 PASSED 00:18:56.632 free 0x2000005fffc0 8388608 00:18:56.632 unregister 0x200000400000 10485760 PASSED 00:18:56.632 passed 00:18:56.632 00:18:56.632 Run Summary: Type Total Ran Passed Failed Inactive 00:18:56.632 suites 1 1 n/a 0 0 00:18:56.632 tests 1 1 1 0 0 00:18:56.632 asserts 15 15 15 0 n/a 00:18:56.632 00:18:56.632 Elapsed time = 0.060 seconds 00:18:56.632 00:18:56.632 real 0m0.259s 00:18:56.632 user 0m0.097s 00:18:56.632 sys 0m0.062s 00:18:56.632 11:12:15 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:56.632 ************************************ 00:18:56.632 END TEST env_mem_callbacks 00:18:56.632 ************************************ 00:18:56.632 11:12:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:18:56.632 00:18:56.632 real 0m8.958s 00:18:56.632 user 0m7.006s 00:18:56.632 sys 0m1.418s 00:18:56.632 11:12:15 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:56.632 ************************************ 00:18:56.632 END TEST env 00:18:56.632 ************************************ 00:18:56.632 11:12:15 env -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.891 11:12:15 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:18:56.891 11:12:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:56.891 11:12:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:56.891 11:12:15 -- common/autotest_common.sh@10 -- # set +x 00:18:56.891 ************************************ 00:18:56.891 START TEST rpc 00:18:56.891 ************************************ 00:18:56.891 11:12:15 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:18:56.891 * Looking for test storage... 00:18:56.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:18:56.891 11:12:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=46154 00:18:56.891 11:12:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:56.891 11:12:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:18:56.891 11:12:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 46154 00:18:56.891 11:12:15 rpc -- common/autotest_common.sh@827 -- # '[' -z 46154 ']' 00:18:56.891 11:12:15 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.891 11:12:15 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:56.891 11:12:15 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.891 11:12:15 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:56.891 11:12:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.891 [2024-05-15 11:12:15.510970] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:18:56.891 [2024-05-15 11:12:15.511142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46154 ] 00:18:57.149 [2024-05-15 11:12:15.664785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.407 [2024-05-15 11:12:15.891126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:18:57.407 [2024-05-15 11:12:15.891201] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 46154' to capture a snapshot of events at runtime. 00:18:57.407 [2024-05-15 11:12:15.891234] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.407 [2024-05-15 11:12:15.891255] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.407 [2024-05-15 11:12:15.891294] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid46154 for offline analysis/debug. 
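From here on, rpc.sh drives the freshly started spdk_tgt through rpc_cmd (bdev_get_bdevs, bdev_malloc_create, and so on) over /var/tmp/spdk.sock. On the C side, each such method is a handler registered with the target's JSON-RPC server; a skeletal example of that registration pattern — the method and struct names below are made up, not one of the methods exercised in this run:

#include "spdk/stdinc.h"
#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"
#include "spdk/json.h"
#include "spdk/util.h"

struct rpc_example_params {
    char *name;
};

static const struct spdk_json_object_decoder rpc_example_decoders[] = {
    {"name", offsetof(struct rpc_example_params, name), spdk_json_decode_string},
};

static void
rpc_example_method(struct spdk_jsonrpc_request *request, const struct spdk_json_val *params)
{
    struct rpc_example_params req = {NULL};

    if (spdk_json_decode_object(params, rpc_example_decoders,
                                SPDK_COUNTOF(rpc_example_decoders), &req)) {
        spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                         "Invalid parameters");
        return;
    }

    /* ... do the actual work, e.g. create or look up a bdev named req.name ... */

    spdk_jsonrpc_send_bool_response(request, true);
    free(req.name);
}
SPDK_RPC_REGISTER("example_method", rpc_example_method, SPDK_RPC_RUNTIME)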
00:18:57.407 [2024-05-15 11:12:15.891351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.342 11:12:16 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:58.342 11:12:16 rpc -- common/autotest_common.sh@860 -- # return 0 00:18:58.342 11:12:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:18:58.342 11:12:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:18:58.342 11:12:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:18:58.342 11:12:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:18:58.342 11:12:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:58.342 11:12:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:58.342 11:12:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.342 ************************************ 00:18:58.342 START TEST rpc_integrity 00:18:58.342 ************************************ 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:18:58.342 { 00:18:58.342 "name": "Malloc0", 00:18:58.342 "aliases": [ 00:18:58.342 "4342269d-74f9-4b74-aa7f-c83ff551ca55" 00:18:58.342 ], 00:18:58.342 "product_name": "Malloc disk", 00:18:58.342 "block_size": 512, 00:18:58.342 "num_blocks": 16384, 00:18:58.342 "uuid": "4342269d-74f9-4b74-aa7f-c83ff551ca55", 00:18:58.342 "assigned_rate_limits": { 00:18:58.342 "rw_ios_per_sec": 0, 00:18:58.342 "rw_mbytes_per_sec": 0, 00:18:58.342 "r_mbytes_per_sec": 0, 00:18:58.342 "w_mbytes_per_sec": 0 00:18:58.342 }, 00:18:58.342 "claimed": false, 00:18:58.342 "zoned": false, 00:18:58.342 "supported_io_types": { 00:18:58.342 "read": true, 00:18:58.342 "write": true, 00:18:58.342 "unmap": true, 00:18:58.342 "write_zeroes": 
true, 00:18:58.342 "flush": true, 00:18:58.342 "reset": true, 00:18:58.342 "compare": false, 00:18:58.342 "compare_and_write": false, 00:18:58.342 "abort": true, 00:18:58.342 "nvme_admin": false, 00:18:58.342 "nvme_io": false 00:18:58.342 }, 00:18:58.342 "memory_domains": [ 00:18:58.342 { 00:18:58.342 "dma_device_id": "system", 00:18:58.342 "dma_device_type": 1 00:18:58.342 }, 00:18:58.342 { 00:18:58.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.342 "dma_device_type": 2 00:18:58.342 } 00:18:58.342 ], 00:18:58.342 "driver_specific": {} 00:18:58.342 } 00:18:58.342 ]' 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:58.342 [2024-05-15 11:12:16.893552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:18:58.342 [2024-05-15 11:12:16.893648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.342 [2024-05-15 11:12:16.893701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028880 00:18:58.342 [2024-05-15 11:12:16.893731] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.342 [2024-05-15 11:12:16.895528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.342 [2024-05-15 11:12:16.895584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:18:58.342 Passthru0 00:18:58.342 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.342 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:18:58.343 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.343 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:58.343 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.343 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:18:58.343 { 00:18:58.343 "name": "Malloc0", 00:18:58.343 "aliases": [ 00:18:58.343 "4342269d-74f9-4b74-aa7f-c83ff551ca55" 00:18:58.343 ], 00:18:58.343 "product_name": "Malloc disk", 00:18:58.343 "block_size": 512, 00:18:58.343 "num_blocks": 16384, 00:18:58.343 "uuid": "4342269d-74f9-4b74-aa7f-c83ff551ca55", 00:18:58.343 "assigned_rate_limits": { 00:18:58.343 "rw_ios_per_sec": 0, 00:18:58.343 "rw_mbytes_per_sec": 0, 00:18:58.343 "r_mbytes_per_sec": 0, 00:18:58.343 "w_mbytes_per_sec": 0 00:18:58.343 }, 00:18:58.343 "claimed": true, 00:18:58.343 "claim_type": "exclusive_write", 00:18:58.343 "zoned": false, 00:18:58.343 "supported_io_types": { 00:18:58.343 "read": true, 00:18:58.343 "write": true, 00:18:58.343 "unmap": true, 00:18:58.343 "write_zeroes": true, 00:18:58.343 "flush": true, 00:18:58.343 "reset": true, 00:18:58.343 "compare": false, 00:18:58.343 "compare_and_write": false, 00:18:58.343 "abort": true, 00:18:58.343 "nvme_admin": false, 00:18:58.343 "nvme_io": false 00:18:58.343 }, 00:18:58.343 "memory_domains": [ 00:18:58.343 { 00:18:58.343 "dma_device_id": "system", 00:18:58.343 "dma_device_type": 1 00:18:58.343 }, 00:18:58.343 { 00:18:58.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.343 "dma_device_type": 2 00:18:58.343 } 
00:18:58.343 ], 00:18:58.343 "driver_specific": {} 00:18:58.343 }, 00:18:58.343 { 00:18:58.343 "name": "Passthru0", 00:18:58.343 "aliases": [ 00:18:58.343 "521c24f9-adaa-5bd2-9121-002135e71998" 00:18:58.343 ], 00:18:58.343 "product_name": "passthru", 00:18:58.343 "block_size": 512, 00:18:58.343 "num_blocks": 16384, 00:18:58.343 "uuid": "521c24f9-adaa-5bd2-9121-002135e71998", 00:18:58.343 "assigned_rate_limits": { 00:18:58.343 "rw_ios_per_sec": 0, 00:18:58.343 "rw_mbytes_per_sec": 0, 00:18:58.343 "r_mbytes_per_sec": 0, 00:18:58.343 "w_mbytes_per_sec": 0 00:18:58.343 }, 00:18:58.343 "claimed": false, 00:18:58.343 "zoned": false, 00:18:58.343 "supported_io_types": { 00:18:58.343 "read": true, 00:18:58.343 "write": true, 00:18:58.343 "unmap": true, 00:18:58.343 "write_zeroes": true, 00:18:58.343 "flush": true, 00:18:58.343 "reset": true, 00:18:58.343 "compare": false, 00:18:58.343 "compare_and_write": false, 00:18:58.343 "abort": true, 00:18:58.343 "nvme_admin": false, 00:18:58.343 "nvme_io": false 00:18:58.343 }, 00:18:58.343 "memory_domains": [ 00:18:58.343 { 00:18:58.343 "dma_device_id": "system", 00:18:58.343 "dma_device_type": 1 00:18:58.343 }, 00:18:58.343 { 00:18:58.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.343 "dma_device_type": 2 00:18:58.343 } 00:18:58.343 ], 00:18:58.343 "driver_specific": { 00:18:58.343 "passthru": { 00:18:58.343 "name": "Passthru0", 00:18:58.343 "base_bdev_name": "Malloc0" 00:18:58.343 } 00:18:58.343 } 00:18:58.343 } 00:18:58.343 ]' 00:18:58.343 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:18:58.343 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:18:58.343 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:18:58.343 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.343 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.602 11:12:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:58.602 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.602 11:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 11:12:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.602 11:12:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:58.602 11:12:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.602 11:12:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 11:12:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.602 11:12:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:18:58.602 11:12:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:18:58.602 11:12:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:18:58.602 00:18:58.602 real 0m0.359s 00:18:58.602 user 0m0.236s 00:18:58.602 sys 0m0.032s 00:18:58.602 11:12:17 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:58.602 ************************************ 00:18:58.602 END TEST rpc_integrity 00:18:58.602 ************************************ 00:18:58.602 11:12:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 11:12:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:18:58.602 11:12:17 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:58.602 11:12:17 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:18:58.602 11:12:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 ************************************ 00:18:58.602 START TEST rpc_plugins 00:18:58.602 ************************************ 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:18:58.602 { 00:18:58.602 "name": "Malloc1", 00:18:58.602 "aliases": [ 00:18:58.602 "971255d6-f2d0-431f-9b29-69d8800da7ee" 00:18:58.602 ], 00:18:58.602 "product_name": "Malloc disk", 00:18:58.602 "block_size": 4096, 00:18:58.602 "num_blocks": 256, 00:18:58.602 "uuid": "971255d6-f2d0-431f-9b29-69d8800da7ee", 00:18:58.602 "assigned_rate_limits": { 00:18:58.602 "rw_ios_per_sec": 0, 00:18:58.602 "rw_mbytes_per_sec": 0, 00:18:58.602 "r_mbytes_per_sec": 0, 00:18:58.602 "w_mbytes_per_sec": 0 00:18:58.602 }, 00:18:58.602 "claimed": false, 00:18:58.602 "zoned": false, 00:18:58.602 "supported_io_types": { 00:18:58.602 "read": true, 00:18:58.602 "write": true, 00:18:58.602 "unmap": true, 00:18:58.602 "write_zeroes": true, 00:18:58.602 "flush": true, 00:18:58.602 "reset": true, 00:18:58.602 "compare": false, 00:18:58.602 "compare_and_write": false, 00:18:58.602 "abort": true, 00:18:58.602 "nvme_admin": false, 00:18:58.602 "nvme_io": false 00:18:58.602 }, 00:18:58.602 "memory_domains": [ 00:18:58.602 { 00:18:58.602 "dma_device_id": "system", 00:18:58.602 "dma_device_type": 1 00:18:58.602 }, 00:18:58.602 { 00:18:58.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.602 "dma_device_type": 2 00:18:58.602 } 00:18:58.602 ], 00:18:58.602 "driver_specific": {} 00:18:58.602 } 00:18:58.602 ]' 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:18:58.602 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:18:58.861 ************************************ 
00:18:58.861 END TEST rpc_plugins 00:18:58.861 ************************************ 00:18:58.861 11:12:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:18:58.861 00:18:58.861 real 0m0.162s 00:18:58.861 user 0m0.120s 00:18:58.861 sys 0m0.013s 00:18:58.861 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:58.861 11:12:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:58.861 11:12:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:18:58.861 11:12:17 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:58.861 11:12:17 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:58.861 11:12:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.861 ************************************ 00:18:58.861 START TEST rpc_trace_cmd_test 00:18:58.861 ************************************ 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:18:58.861 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid46154", 00:18:58.861 "tpoint_group_mask": "0x8", 00:18:58.861 "iscsi_conn": { 00:18:58.861 "mask": "0x2", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "scsi": { 00:18:58.861 "mask": "0x4", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "bdev": { 00:18:58.861 "mask": "0x8", 00:18:58.861 "tpoint_mask": "0xffffffffffffffff" 00:18:58.861 }, 00:18:58.861 "nvmf_rdma": { 00:18:58.861 "mask": "0x10", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "nvmf_tcp": { 00:18:58.861 "mask": "0x20", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "ftl": { 00:18:58.861 "mask": "0x40", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "blobfs": { 00:18:58.861 "mask": "0x80", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "dsa": { 00:18:58.861 "mask": "0x200", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "thread": { 00:18:58.861 "mask": "0x400", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "nvme_pcie": { 00:18:58.861 "mask": "0x800", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "iaa": { 00:18:58.861 "mask": "0x1000", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "nvme_tcp": { 00:18:58.861 "mask": "0x2000", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "bdev_nvme": { 00:18:58.861 "mask": "0x4000", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 }, 00:18:58.861 "sock": { 00:18:58.861 "mask": "0x8000", 00:18:58.861 "tpoint_mask": "0x0" 00:18:58.861 } 00:18:58.861 }' 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:18:58.861 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:18:59.119 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:18:59.119 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:18:59.119 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:18:59.119 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:18:59.119 ************************************ 00:18:59.119 END TEST rpc_trace_cmd_test 00:18:59.119 ************************************ 00:18:59.119 11:12:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:18:59.119 00:18:59.119 real 0m0.316s 00:18:59.119 user 0m0.285s 00:18:59.119 sys 0m0.024s 00:18:59.119 11:12:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:59.119 11:12:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.120 11:12:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:18:59.120 11:12:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:18:59.120 11:12:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:18:59.120 11:12:17 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:59.120 11:12:17 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:59.120 11:12:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.120 ************************************ 00:18:59.120 START TEST rpc_daemon_integrity 00:18:59.120 ************************************ 00:18:59.120 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:18:59.120 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:59.120 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.120 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:59.120 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.120 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:18:59.120 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.378 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:18:59.378 { 00:18:59.378 "name": "Malloc2", 00:18:59.378 "aliases": [ 00:18:59.378 "7d565e85-2b33-4a1a-8710-1810683d01c9" 00:18:59.378 ], 00:18:59.379 "product_name": "Malloc disk", 00:18:59.379 "block_size": 512, 00:18:59.379 "num_blocks": 16384, 00:18:59.379 "uuid": "7d565e85-2b33-4a1a-8710-1810683d01c9", 00:18:59.379 "assigned_rate_limits": { 00:18:59.379 "rw_ios_per_sec": 0, 00:18:59.379 
"rw_mbytes_per_sec": 0, 00:18:59.379 "r_mbytes_per_sec": 0, 00:18:59.379 "w_mbytes_per_sec": 0 00:18:59.379 }, 00:18:59.379 "claimed": false, 00:18:59.379 "zoned": false, 00:18:59.379 "supported_io_types": { 00:18:59.379 "read": true, 00:18:59.379 "write": true, 00:18:59.379 "unmap": true, 00:18:59.379 "write_zeroes": true, 00:18:59.379 "flush": true, 00:18:59.379 "reset": true, 00:18:59.379 "compare": false, 00:18:59.379 "compare_and_write": false, 00:18:59.379 "abort": true, 00:18:59.379 "nvme_admin": false, 00:18:59.379 "nvme_io": false 00:18:59.379 }, 00:18:59.379 "memory_domains": [ 00:18:59.379 { 00:18:59.379 "dma_device_id": "system", 00:18:59.379 "dma_device_type": 1 00:18:59.379 }, 00:18:59.379 { 00:18:59.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.379 "dma_device_type": 2 00:18:59.379 } 00:18:59.379 ], 00:18:59.379 "driver_specific": {} 00:18:59.379 } 00:18:59.379 ]' 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:59.379 [2024-05-15 11:12:17.870904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:18:59.379 [2024-05-15 11:12:17.871003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.379 [2024-05-15 11:12:17.871071] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:18:59.379 [2024-05-15 11:12:17.871110] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.379 [2024-05-15 11:12:17.873043] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.379 [2024-05-15 11:12:17.873094] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:18:59.379 Passthru0 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:18:59.379 { 00:18:59.379 "name": "Malloc2", 00:18:59.379 "aliases": [ 00:18:59.379 "7d565e85-2b33-4a1a-8710-1810683d01c9" 00:18:59.379 ], 00:18:59.379 "product_name": "Malloc disk", 00:18:59.379 "block_size": 512, 00:18:59.379 "num_blocks": 16384, 00:18:59.379 "uuid": "7d565e85-2b33-4a1a-8710-1810683d01c9", 00:18:59.379 "assigned_rate_limits": { 00:18:59.379 "rw_ios_per_sec": 0, 00:18:59.379 "rw_mbytes_per_sec": 0, 00:18:59.379 "r_mbytes_per_sec": 0, 00:18:59.379 "w_mbytes_per_sec": 0 00:18:59.379 }, 00:18:59.379 "claimed": true, 00:18:59.379 "claim_type": "exclusive_write", 00:18:59.379 "zoned": false, 00:18:59.379 "supported_io_types": { 00:18:59.379 "read": true, 00:18:59.379 "write": true, 00:18:59.379 "unmap": true, 00:18:59.379 "write_zeroes": true, 00:18:59.379 "flush": true, 00:18:59.379 "reset": true, 00:18:59.379 "compare": false, 
00:18:59.379 "compare_and_write": false, 00:18:59.379 "abort": true, 00:18:59.379 "nvme_admin": false, 00:18:59.379 "nvme_io": false 00:18:59.379 }, 00:18:59.379 "memory_domains": [ 00:18:59.379 { 00:18:59.379 "dma_device_id": "system", 00:18:59.379 "dma_device_type": 1 00:18:59.379 }, 00:18:59.379 { 00:18:59.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.379 "dma_device_type": 2 00:18:59.379 } 00:18:59.379 ], 00:18:59.379 "driver_specific": {} 00:18:59.379 }, 00:18:59.379 { 00:18:59.379 "name": "Passthru0", 00:18:59.379 "aliases": [ 00:18:59.379 "e517f73d-4ff1-5033-949c-16d713132b20" 00:18:59.379 ], 00:18:59.379 "product_name": "passthru", 00:18:59.379 "block_size": 512, 00:18:59.379 "num_blocks": 16384, 00:18:59.379 "uuid": "e517f73d-4ff1-5033-949c-16d713132b20", 00:18:59.379 "assigned_rate_limits": { 00:18:59.379 "rw_ios_per_sec": 0, 00:18:59.379 "rw_mbytes_per_sec": 0, 00:18:59.379 "r_mbytes_per_sec": 0, 00:18:59.379 "w_mbytes_per_sec": 0 00:18:59.379 }, 00:18:59.379 "claimed": false, 00:18:59.379 "zoned": false, 00:18:59.379 "supported_io_types": { 00:18:59.379 "read": true, 00:18:59.379 "write": true, 00:18:59.379 "unmap": true, 00:18:59.379 "write_zeroes": true, 00:18:59.379 "flush": true, 00:18:59.379 "reset": true, 00:18:59.379 "compare": false, 00:18:59.379 "compare_and_write": false, 00:18:59.379 "abort": true, 00:18:59.379 "nvme_admin": false, 00:18:59.379 "nvme_io": false 00:18:59.379 }, 00:18:59.379 "memory_domains": [ 00:18:59.379 { 00:18:59.379 "dma_device_id": "system", 00:18:59.379 "dma_device_type": 1 00:18:59.379 }, 00:18:59.379 { 00:18:59.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.379 "dma_device_type": 2 00:18:59.379 } 00:18:59.379 ], 00:18:59.379 "driver_specific": { 00:18:59.379 "passthru": { 00:18:59.379 "name": "Passthru0", 00:18:59.379 "base_bdev_name": "Malloc2" 00:18:59.379 } 00:18:59.379 } 00:18:59.379 } 00:18:59.379 ]' 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.379 11:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:59.379 11:12:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.379 11:12:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:18:59.379 11:12:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:18:59.638 ************************************ 00:18:59.638 END TEST rpc_daemon_integrity 00:18:59.638 ************************************ 00:18:59.638 
11:12:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:18:59.638 00:18:59.638 real 0m0.368s 00:18:59.638 user 0m0.244s 00:18:59.638 sys 0m0.036s 00:18:59.638 11:12:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:59.638 11:12:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:59.638 11:12:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:59.638 11:12:18 rpc -- rpc/rpc.sh@84 -- # killprocess 46154 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@946 -- # '[' -z 46154 ']' 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@950 -- # kill -0 46154 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@951 -- # uname 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46154 00:18:59.638 killing process with pid 46154 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46154' 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@965 -- # kill 46154 00:18:59.638 11:12:18 rpc -- common/autotest_common.sh@970 -- # wait 46154 00:19:02.170 ************************************ 00:19:02.170 END TEST rpc 00:19:02.170 ************************************ 00:19:02.170 00:19:02.170 real 0m5.029s 00:19:02.170 user 0m5.749s 00:19:02.170 sys 0m0.737s 00:19:02.170 11:12:20 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:02.170 11:12:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.170 11:12:20 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:19:02.170 11:12:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:02.170 11:12:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:02.170 11:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.170 ************************************ 00:19:02.170 START TEST skip_rpc 00:19:02.170 ************************************ 00:19:02.170 11:12:20 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:19:02.170 * Looking for test storage... 
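The rpc_integrity, rpc_plugins and rpc_daemon_integrity traces above all exercise the same create/inspect/delete cycle against the target's JSON-RPC interface. Condensed into plain rpc.py calls (the rpc_cmd helper seen in the trace is a harness wrapper around the same RPCs; socket defaults and the auto-assigned bdev name Malloc2 are taken from the trace, not guaranteed), the Malloc2/Passthru0 pass reduces to roughly:

  rpc="scripts/rpc.py"                               # run from the SPDK repo root against an already-running target
  $rpc bdev_malloc_create 8 512                      # 8 MB, 512-byte blocks; the trace shows the auto-assigned name Malloc2
  $rpc bdev_passthru_create -b Malloc2 -p Passthru0  # stack a passthru vbdev on the malloc bdev and claim it
  $rpc bdev_get_bdevs | jq length                    # 2 -> the "'[' 2 == 2 ']'" assertion in the trace
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc2
  $rpc bdev_get_bdevs | jq length                    # 0 -> the closing "'[' 0 == 0 ']'" assertion

The bdev_get_bdevs JSON dumps above (claimed/claim_type, supported_io_types, driver_specific.passthru.base_bdev_name) are what the jq length checks are counting.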
00:19:02.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:19:02.170 11:12:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:19:02.170 11:12:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:19:02.170 11:12:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:19:02.170 11:12:20 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:02.170 11:12:20 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:02.170 11:12:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.170 ************************************ 00:19:02.170 START TEST skip_rpc 00:19:02.170 ************************************ 00:19:02.170 11:12:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:19:02.170 11:12:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=46416 00:19:02.170 11:12:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:02.170 11:12:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:19:02.170 11:12:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:19:02.170 [2024-05-15 11:12:20.602427] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:19:02.171 [2024-05-15 11:12:20.602611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46416 ] 00:19:02.171 [2024-05-15 11:12:20.754967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.429 [2024-05-15 11:12:20.967638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 46416 00:19:07.694 11:12:25 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 46416 ']' 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 46416 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46416 00:19:07.694 killing process with pid 46416 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46416' 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 46416 00:19:07.694 11:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 46416 00:19:09.148 00:19:09.148 real 0m7.243s 00:19:09.148 user 0m6.687s 00:19:09.148 sys 0m0.381s 00:19:09.148 11:12:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:09.148 11:12:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.148 ************************************ 00:19:09.148 END TEST skip_rpc 00:19:09.148 ************************************ 00:19:09.148 11:12:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:19:09.148 11:12:27 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:09.148 11:12:27 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:09.148 11:12:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.148 ************************************ 00:19:09.148 START TEST skip_rpc_with_json 00:19:09.148 ************************************ 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:19:09.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=46538 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 46538 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 46538 ']' 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:09.148 11:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:09.404 [2024-05-15 11:12:27.898059] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
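The skip_rpc pass that finished above starts the target with --no-rpc-server, so the one rpc_cmd spdk_get_version issued afterwards is wrapped in NOT and is required to fail. Stripped of the xtrace noise, the shape of that check is roughly the following (rpc_cmd, NOT and killprocess are harness helpers from autotest_common.sh; this sketch substitutes plain rpc.py and an if-negation for them):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                    # the harness sleeps rather than waiting for an RPC socket that will never appear
  if scripts/rpc.py spdk_get_version; then   # must fail: no RPC server was started
      echo "spdk_get_version unexpectedly succeeded" >&2
      exit 1
  fi
  kill "$spdk_pid" && wait "$spdk_pid"       # roughly what killprocess does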
00:19:09.404 [2024-05-15 11:12:27.898240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46538 ] 00:19:09.662 [2024-05-15 11:12:28.051217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.662 [2024-05-15 11:12:28.265421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:10.593 [2024-05-15 11:12:29.076403] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:19:10.593 request: 00:19:10.593 { 00:19:10.593 "trtype": "tcp", 00:19:10.593 "method": "nvmf_get_transports", 00:19:10.593 "req_id": 1 00:19:10.593 } 00:19:10.593 Got JSON-RPC error response 00:19:10.593 response: 00:19:10.593 { 00:19:10.593 "code": -19, 00:19:10.593 "message": "No such device" 00:19:10.593 } 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:10.593 [2024-05-15 11:12:29.088461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.593 11:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:19:10.852 { 00:19:10.852 "subsystems": [ 00:19:10.852 { 00:19:10.852 "subsystem": "scheduler", 00:19:10.852 "config": [ 00:19:10.852 { 00:19:10.852 "method": "framework_set_scheduler", 00:19:10.852 "params": { 00:19:10.852 "name": "static" 00:19:10.852 } 00:19:10.852 } 00:19:10.852 ] 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "subsystem": "vmd", 00:19:10.852 "config": [] 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "subsystem": "sock", 00:19:10.852 "config": [ 00:19:10.852 { 00:19:10.852 "method": "sock_impl_set_options", 00:19:10.852 "params": { 00:19:10.852 "impl_name": "posix", 00:19:10.852 "recv_buf_size": 2097152, 00:19:10.852 "send_buf_size": 2097152, 00:19:10.852 "enable_recv_pipe": true, 00:19:10.852 "enable_quickack": false, 00:19:10.852 "enable_placement_id": 0, 00:19:10.852 "enable_zerocopy_send_server": true, 00:19:10.852 "enable_zerocopy_send_client": false, 00:19:10.852 "zerocopy_threshold": 0, 00:19:10.852 "tls_version": 0, 
00:19:10.852 "enable_ktls": false 00:19:10.852 } 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "method": "sock_impl_set_options", 00:19:10.852 "params": { 00:19:10.852 "impl_name": "ssl", 00:19:10.852 "recv_buf_size": 4096, 00:19:10.852 "send_buf_size": 4096, 00:19:10.852 "enable_recv_pipe": true, 00:19:10.852 "enable_quickack": false, 00:19:10.852 "enable_placement_id": 0, 00:19:10.852 "enable_zerocopy_send_server": true, 00:19:10.852 "enable_zerocopy_send_client": false, 00:19:10.852 "zerocopy_threshold": 0, 00:19:10.852 "tls_version": 0, 00:19:10.852 "enable_ktls": false 00:19:10.852 } 00:19:10.852 } 00:19:10.852 ] 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "subsystem": "iobuf", 00:19:10.852 "config": [ 00:19:10.852 { 00:19:10.852 "method": "iobuf_set_options", 00:19:10.852 "params": { 00:19:10.852 "small_pool_count": 8192, 00:19:10.852 "large_pool_count": 1024, 00:19:10.852 "small_bufsize": 8192, 00:19:10.852 "large_bufsize": 135168 00:19:10.852 } 00:19:10.852 } 00:19:10.852 ] 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "subsystem": "keyring", 00:19:10.852 "config": [] 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "subsystem": "accel", 00:19:10.852 "config": [ 00:19:10.852 { 00:19:10.852 "method": "accel_set_options", 00:19:10.852 "params": { 00:19:10.852 "small_cache_size": 128, 00:19:10.852 "large_cache_size": 16, 00:19:10.852 "task_count": 2048, 00:19:10.852 "sequence_count": 2048, 00:19:10.852 "buf_count": 2048 00:19:10.852 } 00:19:10.852 } 00:19:10.852 ] 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "subsystem": "bdev", 00:19:10.852 "config": [ 00:19:10.852 { 00:19:10.852 "method": "bdev_set_options", 00:19:10.852 "params": { 00:19:10.852 "bdev_io_pool_size": 65535, 00:19:10.852 "bdev_io_cache_size": 256, 00:19:10.852 "bdev_auto_examine": true, 00:19:10.852 "iobuf_small_cache_size": 128, 00:19:10.852 "iobuf_large_cache_size": 16 00:19:10.852 } 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "method": "bdev_raid_set_options", 00:19:10.852 "params": { 00:19:10.852 "process_window_size_kb": 1024 00:19:10.852 } 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "method": "bdev_nvme_set_options", 00:19:10.852 "params": { 00:19:10.852 "action_on_timeout": "none", 00:19:10.852 "timeout_us": 0, 00:19:10.852 "timeout_admin_us": 0, 00:19:10.852 "keep_alive_timeout_ms": 10000, 00:19:10.852 "arbitration_burst": 0, 00:19:10.852 "low_priority_weight": 0, 00:19:10.852 "medium_priority_weight": 0, 00:19:10.852 "high_priority_weight": 0, 00:19:10.852 "nvme_adminq_poll_period_us": 10000, 00:19:10.852 "nvme_ioq_poll_period_us": 0, 00:19:10.852 "io_queue_requests": 0, 00:19:10.852 "delay_cmd_submit": true, 00:19:10.852 "transport_retry_count": 4, 00:19:10.852 "bdev_retry_count": 3, 00:19:10.852 "transport_ack_timeout": 0, 00:19:10.852 "ctrlr_loss_timeout_sec": 0, 00:19:10.852 "reconnect_delay_sec": 0, 00:19:10.852 "fast_io_fail_timeout_sec": 0, 00:19:10.852 "disable_auto_failback": false, 00:19:10.852 "generate_uuids": false, 00:19:10.852 "transport_tos": 0, 00:19:10.852 "nvme_error_stat": false, 00:19:10.852 "rdma_srq_size": 0, 00:19:10.852 "io_path_stat": false, 00:19:10.852 "allow_accel_sequence": false, 00:19:10.852 "rdma_max_cq_size": 0, 00:19:10.852 "rdma_cm_event_timeout_ms": 0, 00:19:10.852 "dhchap_digests": [ 00:19:10.852 "sha256", 00:19:10.852 "sha384", 00:19:10.852 "sha512" 00:19:10.852 ], 00:19:10.852 "dhchap_dhgroups": [ 00:19:10.852 "null", 00:19:10.852 "ffdhe2048", 00:19:10.852 "ffdhe3072", 00:19:10.852 "ffdhe4096", 00:19:10.852 "ffdhe6144", 00:19:10.852 "ffdhe8192" 00:19:10.852 ] 00:19:10.852 } 
00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "method": "bdev_nvme_set_hotplug", 00:19:10.852 "params": { 00:19:10.852 "period_us": 100000, 00:19:10.852 "enable": false 00:19:10.852 } 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "method": "bdev_wait_for_examine" 00:19:10.852 } 00:19:10.852 ] 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "subsystem": "nvmf", 00:19:10.852 "config": [ 00:19:10.852 { 00:19:10.852 "method": "nvmf_set_config", 00:19:10.852 "params": { 00:19:10.852 "discovery_filter": "match_any", 00:19:10.852 "admin_cmd_passthru": { 00:19:10.852 "identify_ctrlr": false 00:19:10.852 } 00:19:10.852 } 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "method": "nvmf_set_max_subsystems", 00:19:10.852 "params": { 00:19:10.852 "max_subsystems": 1024 00:19:10.852 } 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "method": "nvmf_set_crdt", 00:19:10.852 "params": { 00:19:10.852 "crdt1": 0, 00:19:10.852 "crdt2": 0, 00:19:10.852 "crdt3": 0 00:19:10.852 } 00:19:10.852 }, 00:19:10.852 { 00:19:10.852 "method": "nvmf_create_transport", 00:19:10.852 "params": { 00:19:10.852 "trtype": "TCP", 00:19:10.852 "max_queue_depth": 128, 00:19:10.852 "max_io_qpairs_per_ctrlr": 127, 00:19:10.852 "in_capsule_data_size": 4096, 00:19:10.852 "max_io_size": 131072, 00:19:10.852 "io_unit_size": 131072, 00:19:10.852 "max_aq_depth": 128, 00:19:10.852 "num_shared_buffers": 511, 00:19:10.852 "buf_cache_size": 4294967295, 00:19:10.852 "dif_insert_or_strip": false, 00:19:10.852 "zcopy": false, 00:19:10.852 "c2h_success": true, 00:19:10.852 "sock_priority": 0, 00:19:10.852 "abort_timeout_sec": 1, 00:19:10.852 "ack_timeout": 0, 00:19:10.852 "data_wr_pool_size": 0 00:19:10.852 } 00:19:10.852 } 00:19:10.852 ] 00:19:10.852 }, 00:19:10.853 { 00:19:10.853 "subsystem": "nbd", 00:19:10.853 "config": [] 00:19:10.853 }, 00:19:10.853 { 00:19:10.853 "subsystem": "vhost_blk", 00:19:10.853 "config": [] 00:19:10.853 }, 00:19:10.853 { 00:19:10.853 "subsystem": "scsi", 00:19:10.853 "config": null 00:19:10.853 }, 00:19:10.853 { 00:19:10.853 "subsystem": "iscsi", 00:19:10.853 "config": [ 00:19:10.853 { 00:19:10.853 "method": "iscsi_set_options", 00:19:10.853 "params": { 00:19:10.853 "node_base": "iqn.2016-06.io.spdk", 00:19:10.853 "max_sessions": 128, 00:19:10.853 "max_connections_per_session": 2, 00:19:10.853 "max_queue_depth": 64, 00:19:10.853 "default_time2wait": 2, 00:19:10.853 "default_time2retain": 20, 00:19:10.853 "first_burst_length": 8192, 00:19:10.853 "immediate_data": true, 00:19:10.853 "allow_duplicated_isid": false, 00:19:10.853 "error_recovery_level": 0, 00:19:10.853 "nop_timeout": 60, 00:19:10.853 "nop_in_interval": 30, 00:19:10.853 "disable_chap": false, 00:19:10.853 "require_chap": false, 00:19:10.853 "mutual_chap": false, 00:19:10.853 "chap_group": 0, 00:19:10.853 "max_large_datain_per_connection": 64, 00:19:10.853 "max_r2t_per_connection": 4, 00:19:10.853 "pdu_pool_size": 36864, 00:19:10.853 "immediate_data_pool_size": 16384, 00:19:10.853 "data_out_pool_size": 2048 00:19:10.853 } 00:19:10.853 } 00:19:10.853 ] 00:19:10.853 }, 00:19:10.853 { 00:19:10.853 "subsystem": "vhost_scsi", 00:19:10.853 "config": [] 00:19:10.853 } 00:19:10.853 ] 00:19:10.853 } 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 46538 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46538 ']' 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # 
kill -0 46538 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46538 00:19:10.853 killing process with pid 46538 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46538' 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46538 00:19:10.853 11:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46538 00:19:13.378 11:12:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=46601 00:19:13.378 11:12:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:19:13.378 11:12:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 46601 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46601 ']' 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 46601 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46601 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:18.651 killing process with pid 46601 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46601' 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46601 00:19:18.651 11:12:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46601 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:19:20.621 ************************************ 00:19:20.621 END TEST skip_rpc_with_json 00:19:20.621 ************************************ 00:19:20.621 00:19:20.621 real 0m10.985s 00:19:20.621 user 0m10.298s 00:19:20.621 sys 0m0.826s 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:20.621 11:12:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:19:20.621 11:12:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:20.621 11:12:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:20.621 11:12:38 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.621 ************************************ 00:19:20.621 START TEST skip_rpc_with_delay 00:19:20.621 ************************************ 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:19:20.621 [2024-05-15 11:12:38.935916] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
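The error just above is the whole point of skip_rpc_with_delay: --no-rpc-server and --wait-for-rpc are mutually exclusive, and the NOT/valid_exec_arg wrappers only let the test pass if the launch exits non-zero. A minimal reproduction of the same check, using a direct invocation instead of the harness wrappers (binary path as in the trace):

  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "spdk_tgt accepted conflicting flags" >&2   # would mean the guard reported by app.c above is gone
      exit 1
  fi
  # the es=1 recorded in the trace below is exactly this non-zero exit being captured by NOT()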
00:19:20.621 [2024-05-15 11:12:38.936181] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:20.621 ************************************ 00:19:20.621 END TEST skip_rpc_with_delay 00:19:20.621 ************************************ 00:19:20.621 00:19:20.621 real 0m0.178s 00:19:20.621 user 0m0.040s 00:19:20.621 sys 0m0.042s 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:20.621 11:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:19:20.621 11:12:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:19:20.621 11:12:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:19:20.621 11:12:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:19:20.621 11:12:39 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:20.621 11:12:39 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:20.621 11:12:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.621 ************************************ 00:19:20.621 START TEST exit_on_failed_rpc_init 00:19:20.621 ************************************ 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=46748 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 46748 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 46748 ']' 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:20.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:20.621 11:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:19:20.621 [2024-05-15 11:12:39.159091] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:19:20.621 [2024-05-15 11:12:39.159307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46748 ] 00:19:20.880 [2024-05-15 11:12:39.321137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.139 [2024-05-15 11:12:39.554286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:19:22.074 11:12:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:19:22.074 [2024-05-15 11:12:40.536859] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:19:22.074 [2024-05-15 11:12:40.537074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46777 ] 00:19:22.074 [2024-05-15 11:12:40.702515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.332 [2024-05-15 11:12:40.951038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.332 [2024-05-15 11:12:40.951173] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
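exit_on_failed_rpc_init provokes exactly this collision on purpose: the first target (pid 46748) already owns the default RPC socket, so a second spdk_tgt started with a different core mask but the same /var/tmp/spdk.sock must refuse to come up. The exit-code bookkeeping and cleanup follow below; the essence of the scenario is roughly (a sketch only, the harness uses waitforlisten rather than a fixed sleep):

  build/bin/spdk_tgt -m 0x1 &        # first instance, listens on the default /var/tmp/spdk.sock
  first=$!
  sleep 1                            # stand-in for waitforlisten
  if build/bin/spdk_tgt -m 0x2; then # same default socket -> rpc.c reports "in use" and the app stops
      echo "second target unexpectedly started" >&2
      exit 1
  fi
  kill "$first"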
00:19:22.332 [2024-05-15 11:12:40.951217] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:22.332 [2024-05-15 11:12:40.951247] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 46748 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 46748 ']' 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 46748 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46748 00:19:22.898 killing process with pid 46748 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46748' 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 46748 00:19:22.898 11:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 46748 00:19:25.426 ************************************ 00:19:25.426 END TEST exit_on_failed_rpc_init 00:19:25.426 ************************************ 00:19:25.426 00:19:25.426 real 0m4.658s 00:19:25.426 user 0m5.097s 00:19:25.426 sys 0m0.572s 00:19:25.426 11:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:25.426 11:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:19:25.426 11:12:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:19:25.426 ************************************ 00:19:25.426 END TEST skip_rpc 00:19:25.426 ************************************ 00:19:25.426 00:19:25.426 real 0m23.352s 00:19:25.426 user 0m22.228s 00:19:25.426 sys 0m1.979s 00:19:25.426 11:12:43 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:25.426 11:12:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.426 11:12:43 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:19:25.426 11:12:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:25.426 11:12:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:25.426 11:12:43 -- common/autotest_common.sh@10 -- # set +x 00:19:25.426 
************************************ 00:19:25.426 START TEST rpc_client 00:19:25.426 ************************************ 00:19:25.426 11:12:43 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:19:25.426 * Looking for test storage... 00:19:25.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:19:25.426 11:12:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:19:25.426 OK 00:19:25.426 11:12:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:19:25.426 00:19:25.426 real 0m0.230s 00:19:25.426 user 0m0.082s 00:19:25.426 sys 0m0.061s 00:19:25.426 ************************************ 00:19:25.426 END TEST rpc_client 00:19:25.426 ************************************ 00:19:25.426 11:12:43 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:25.426 11:12:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:19:25.426 11:12:44 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:19:25.426 11:12:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:25.427 11:12:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:25.427 11:12:44 -- common/autotest_common.sh@10 -- # set +x 00:19:25.427 ************************************ 00:19:25.427 START TEST json_config 00:19:25.427 ************************************ 00:19:25.427 11:12:44 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:19:25.685 11:12:44 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:25.685 11:12:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:19:25.685 11:12:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.685 11:12:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.685 11:12:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.685 11:12:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.685 11:12:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.685 11:12:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:961aa868-899d-4a6d-8c67-e04358159924 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=961aa868-899d-4a6d-8c67-e04358159924 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:25.686 11:12:44 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.686 11:12:44 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.686 11:12:44 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.686 11:12:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:25.686 11:12:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:25.686 11:12:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:25.686 11:12:44 json_config -- paths/export.sh@5 -- # export PATH 00:19:25.686 11:12:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@47 -- # : 0 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.686 11:12:44 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@31 -- # app_pid=([target]="" [initiator]="") 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@32 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:19:25.686 11:12:44 
json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@33 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@34 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:19:25.686 INFO: JSON configuration test init 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:25.686 Waiting for target to run... 00:19:25.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:19:25.686 11:12:44 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:19:25.686 11:12:44 json_config -- json_config/common.sh@9 -- # local app=target 00:19:25.686 11:12:44 json_config -- json_config/common.sh@10 -- # shift 00:19:25.686 11:12:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:19:25.686 11:12:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:19:25.686 11:12:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:19:25.686 11:12:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:19:25.686 11:12:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:19:25.686 11:12:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46955 00:19:25.686 11:12:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:19:25.686 11:12:44 json_config -- json_config/common.sh@25 -- # waitforlisten 46955 /var/tmp/spdk_tgt.sock 00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@827 -- # '[' -z 46955 ']' 00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
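For context, the json_config_test_start_app step above reduces to launching spdk_tgt in RPC-wait mode and blocking until its Unix socket answers. A minimal sketch, run from the spdk checkout (/home/vagrant/spdk_repo/spdk in this workspace); the rpc_get_methods polling loop here is only an illustrative stand-in for the waitforlisten helper the test actually uses:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # with --wait-for-rpc no subsystems initialize until told to; just wait for the RPC socket to answer
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done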
00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:25.686 11:12:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:25.686 11:12:44 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:19:25.686 [2024-05-15 11:12:44.258919] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:19:25.686 [2024-05-15 11:12:44.259099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46955 ] 00:19:26.253 [2024-05-15 11:12:44.693956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.511 [2024-05-15 11:12:44.903150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.512 00:19:26.512 11:12:45 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:26.512 11:12:45 json_config -- common/autotest_common.sh@860 -- # return 0 00:19:26.512 11:12:45 json_config -- json_config/common.sh@26 -- # echo '' 00:19:26.512 11:12:45 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:19:26.512 11:12:45 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:19:26.512 11:12:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:26.512 11:12:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:26.512 11:12:45 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:19:26.512 11:12:45 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:19:26.512 11:12:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.512 11:12:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:26.512 11:12:45 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:19:26.770 11:12:45 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:19:26.770 11:12:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:19:27.704 11:12:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:27.704 11:12:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=("bdev_register" "bdev_unregister") 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@48 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:19:27.704 11:12:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@48 -- # local get_types 
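The tgt_check_notification_types step above is a single RPC filtered through jq; replayed by hand against the same socket it comes down to:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
  # for this run the test expects exactly two types:
  # bdev_register
  # bdev_unregister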
00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:19:27.704 11:12:46 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:19:27.704 11:12:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.704 11:12:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:27.962 11:12:46 json_config -- json_config/json_config.sh@55 -- # return 0 00:19:27.962 11:12:46 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:19:27.962 11:12:46 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:19:27.962 11:12:46 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:19:27.962 11:12:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:27.962 11:12:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:27.962 11:12:46 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:19:27.963 11:12:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:19:27.963 11:12:46 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:19:27.963 11:12:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:19:28.221 Nvme0n1p0 Nvme0n1p1 00:19:28.221 11:12:46 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:19:28.221 11:12:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:19:28.480 [2024-05-15 11:12:46.974356] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:19:28.480 [2024-05-15 11:12:46.974491] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:19:28.480 00:19:28.480 11:12:46 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:19:28.480 11:12:46 
json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:19:28.738 Malloc3 00:19:28.738 11:12:47 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:19:28.738 11:12:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:19:28.997 [2024-05-15 11:12:47.414669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:19:28.997 [2024-05-15 11:12:47.414809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.997 [2024-05-15 11:12:47.415063] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036080 00:19:28.997 [2024-05-15 11:12:47.415095] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.997 [2024-05-15 11:12:47.416896] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.997 [2024-05-15 11:12:47.416958] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:19:28.997 PTBdevFromMalloc3 00:19:28.997 11:12:47 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:19:28.997 11:12:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:19:28.997 Null0 00:19:28.997 11:12:47 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:19:28.997 11:12:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:19:29.256 Malloc0 00:19:29.256 11:12:47 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:19:29.256 11:12:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:19:29.515 Malloc1 00:19:29.515 11:12:48 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:19:29.515 11:12:48 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:19:29.774 102400+0 records in 00:19:29.774 102400+0 records out 00:19:29.774 104857600 bytes (105 MB) copied, 0.310876 s, 337 MB/s 00:19:29.774 11:12:48 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:19:29.774 11:12:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:19:30.032 aio_disk 00:19:30.032 11:12:48 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:19:30.032 11:12:48 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:19:30.032 11:12:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:19:30.290 b2a0c54b-c72a-459c-b049-d9179ddbf901 00:19:30.290 11:12:48 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:19:30.290 11:12:48 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:19:30.290 11:12:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:19:30.549 11:12:48 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:19:30.549 11:12:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:19:30.549 11:12:49 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:19:30.549 11:12:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:19:30.807 11:12:49 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:19:30.807 11:12:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e5d72f35-b667-4642-8ca4-b26f608444f1 bdev_register:fe0118fe-bd4d-41ac-af55-072a278fc8fc bdev_register:72b71c53-5d04-45bd-8797-175146c22e15 bdev_register:e06f6c23-4aee-455c-ae86-bc89f3a988a8 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@71 -- # sort 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e5d72f35-b667-4642-8ca4-b26f608444f1 bdev_register:fe0118fe-bd4d-41ac-af55-072a278fc8fc bdev_register:72b71c53-5d04-45bd-8797-175146c22e15 bdev_register:e06f6c23-4aee-455c-ae86-bc89f3a988a8 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:19:31.065 
11:12:49 json_config -- json_config/json_config.sh@72 -- # sort 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:19:31.065 11:12:49 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:19:31.065 11:12:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.323 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 
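The event listing being walked through here comes from one notify_get_notifications call; a sketch of the same query (the -i 0 argument asks for everything from the first recorded event id onward, as in the trace):

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 \
      | jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
  # each output line pairs an event type with the bdev it refers to (bdev_register:Nvme0n1p0:<id> and so on)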
00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:e5d72f35-b667-4642-8ca4-b26f608444f1 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:fe0118fe-bd4d-41ac-af55-072a278fc8fc 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:72b71c53-5d04-45bd-8797-175146c22e15 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:e06f6c23-4aee-455c-ae86-bc89f3a988a8 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:72b71c53-5d04-45bd-8797-175146c22e15 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e06f6c23-4aee-455c-ae86-bc89f3a988a8 bdev_register:e5d72f35-b667-4642-8ca4-b26f608444f1 bdev_register:fe0118fe-bd4d-41ac-af55-072a278fc8fc != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\2\b\7\1\c\5\3\-\5\d\0\4\-\4\5\b\d\-\8\7\9\7\-\1\7\5\1\4\6\c\2\2\e\1\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\0\6\f\6\c\2\3\-\4\a\e\e\-\4\5\5\c\-\a\e\8\6\-\b\c\8\9\f\3\a\9\8\8\a\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\5\d\7\2\f\3\5\-\b\6\6\7\-\4\6\4\2\-\8\c\a\4\-\b\2\6\f\6\0\8\4\4\4\f\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\e\0\1\1\8\f\e\-\b\d\4\d\-\4\1\a\c\-\a\f\5\5\-\0\7\2\a\2\7\8\f\c\8\f\c ]] 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@86 -- # cat 00:19:31.324 11:12:49 
json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:72b71c53-5d04-45bd-8797-175146c22e15 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e06f6c23-4aee-455c-ae86-bc89f3a988a8 bdev_register:e5d72f35-b667-4642-8ca4-b26f608444f1 bdev_register:fe0118fe-bd4d-41ac-af55-072a278fc8fc 00:19:31.324 Expected events matched: 00:19:31.324 bdev_register:72b71c53-5d04-45bd-8797-175146c22e15 00:19:31.324 bdev_register:Malloc0 00:19:31.324 bdev_register:Malloc0p0 00:19:31.324 bdev_register:Malloc0p1 00:19:31.324 bdev_register:Malloc0p2 00:19:31.324 bdev_register:Malloc1 00:19:31.324 bdev_register:Malloc3 00:19:31.324 bdev_register:Null0 00:19:31.324 bdev_register:Nvme0n1 00:19:31.324 bdev_register:Nvme0n1p0 00:19:31.324 bdev_register:Nvme0n1p1 00:19:31.324 bdev_register:PTBdevFromMalloc3 00:19:31.324 bdev_register:aio_disk 00:19:31.324 bdev_register:e06f6c23-4aee-455c-ae86-bc89f3a988a8 00:19:31.324 bdev_register:e5d72f35-b667-4642-8ca4-b26f608444f1 00:19:31.324 bdev_register:fe0118fe-bd4d-41ac-af55-072a278fc8fc 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:19:31.324 11:12:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.324 11:12:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:19:31.324 11:12:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.324 11:12:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:19:31.324 11:12:49 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:19:31.324 11:12:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:19:31.582 MallocBdevForConfigChangeCheck 00:19:31.582 11:12:50 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:19:31.582 11:12:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.582 11:12:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:31.582 11:12:50 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:19:31.582 11:12:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:19:32.148 INFO: shutting down applications... 00:19:32.148 11:12:50 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
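The save_config call above is what materializes spdk_tgt_config.json for the relaunch later in the test; in isolation it is just the following (output redirection added here for illustration — the test writes to the path declared in configs_path):

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json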
00:19:32.148 11:12:50 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:19:32.148 11:12:50 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:19:32.148 11:12:50 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:19:32.148 11:12:50 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:19:32.148 [2024-05-15 11:12:50.745345] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:19:32.406 Calling clear_vhost_scsi_subsystem 00:19:32.406 Calling clear_iscsi_subsystem 00:19:32.406 Calling clear_vhost_blk_subsystem 00:19:32.406 Calling clear_nbd_subsystem 00:19:32.406 Calling clear_nvmf_subsystem 00:19:32.406 Calling clear_bdev_subsystem 00:19:32.406 11:12:50 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:19:32.406 11:12:50 json_config -- json_config/json_config.sh@343 -- # count=100 00:19:32.406 11:12:50 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:19:32.406 11:12:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:19:32.406 11:12:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:19:32.406 11:12:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:19:32.973 11:12:51 json_config -- json_config/json_config.sh@345 -- # break 00:19:32.973 11:12:51 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:19:32.973 11:12:51 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:19:32.973 11:12:51 json_config -- json_config/common.sh@31 -- # local app=target 00:19:32.973 11:12:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:19:32.973 11:12:51 json_config -- json_config/common.sh@35 -- # [[ -n 46955 ]] 00:19:32.973 11:12:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 46955 00:19:32.973 11:12:51 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:19:32.973 11:12:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:32.973 11:12:51 json_config -- json_config/common.sh@41 -- # kill -0 46955 00:19:32.973 11:12:51 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:19:33.233 11:12:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:19:33.233 11:12:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:33.233 11:12:51 json_config -- json_config/common.sh@41 -- # kill -0 46955 00:19:33.233 11:12:51 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:19:33.800 11:12:52 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:19:33.800 11:12:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:33.800 11:12:52 json_config -- json_config/common.sh@41 -- # kill -0 46955 00:19:33.800 SPDK target shutdown done 00:19:33.800 INFO: relaunching applications... 00:19:33.800 Waiting for target to run... 00:19:33.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
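The clear-and-verify sequence above condenses to the pipeline below (both helpers are the ones invoked in the trace; the real test retries this check up to 100 times, as the count=100 loop shows, before forcing a shutdown):

  ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method delete_global_parameters \
      | ./test/json_config/config_filter.py -method check_empty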
00:19:33.800 11:12:52 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:19:33.800 11:12:52 json_config -- json_config/common.sh@43 -- # break 00:19:33.800 11:12:52 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:19:33.800 11:12:52 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:19:33.800 11:12:52 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:19:33.800 11:12:52 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:19:33.800 11:12:52 json_config -- json_config/common.sh@9 -- # local app=target 00:19:33.800 11:12:52 json_config -- json_config/common.sh@10 -- # shift 00:19:33.800 11:12:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:19:33.800 11:12:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:19:33.800 11:12:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:19:33.800 11:12:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:19:33.800 11:12:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:19:33.800 11:12:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=47216 00:19:33.800 11:12:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:19:33.800 11:12:52 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:19:33.800 11:12:52 json_config -- json_config/common.sh@25 -- # waitforlisten 47216 /var/tmp/spdk_tgt.sock 00:19:33.800 11:12:52 json_config -- common/autotest_common.sh@827 -- # '[' -z 47216 ']' 00:19:33.800 11:12:52 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:19:33.800 11:12:52 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:33.800 11:12:52 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:19:33.800 11:12:52 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:33.800 11:12:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:34.058 [2024-05-15 11:12:52.461455] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
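The relaunch above reuses the same binary and socket but boots from the saved JSON instead of waiting for RPCs; stripped of the absolute workspace paths, the command in the trace is:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json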
00:19:34.058 [2024-05-15 11:12:52.461693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47216 ] 00:19:34.317 [2024-05-15 11:12:52.887481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.575 [2024-05-15 11:12:53.076103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.510 [2024-05-15 11:12:53.776911] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:19:35.510 [2024-05-15 11:12:53.777033] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:19:35.510 [2024-05-15 11:12:53.784853] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:19:35.510 [2024-05-15 11:12:53.785076] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:19:35.510 [2024-05-15 11:12:53.792933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:19:35.510 [2024-05-15 11:12:53.792991] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:19:35.510 [2024-05-15 11:12:53.793037] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:19:35.510 [2024-05-15 11:12:53.880601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:19:35.510 [2024-05-15 11:12:53.880700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.510 [2024-05-15 11:12:53.880738] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038780 00:19:35.510 [2024-05-15 11:12:53.880769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.510 [2024-05-15 11:12:53.881404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.510 [2024-05-15 11:12:53.881440] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:19:35.510 00:19:35.510 INFO: Checking if target configuration is the same... 00:19:35.510 11:12:53 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:35.510 11:12:53 json_config -- common/autotest_common.sh@860 -- # return 0 00:19:35.510 11:12:53 json_config -- json_config/common.sh@26 -- # echo '' 00:19:35.510 11:12:53 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:19:35.510 11:12:53 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:19:35.510 11:12:53 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:19:35.510 11:12:53 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:19:35.510 11:12:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:19:35.510 + '[' 2 -ne 2 ']' 00:19:35.510 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:19:35.510 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
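json_diff.sh, traced above, normalizes both configurations before diffing so key ordering cannot cause false mismatches. The core of it, with hypothetical file names standing in for the /dev/fd/62 stream and the saved config, and assuming config_filter.py reads its JSON on stdin as the argument-less invocations in the trace suggest:

  ./test/json_config/config_filter.py -method sort < live_config.json  > /tmp/a.json
  ./test/json_config/config_filter.py -method sort < saved_config.json > /tmp/b.json
  diff -u /tmp/a.json /tmp/b.json && echo 'INFO: JSON config files are the same'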
00:19:35.510 + rootdir=/home/vagrant/spdk_repo/spdk 00:19:35.510 +++ basename /dev/fd/62 00:19:35.510 ++ mktemp /tmp/62.XXX 00:19:35.510 + tmp_file_1=/tmp/62.zTq 00:19:35.510 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:19:35.510 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:19:35.510 + tmp_file_2=/tmp/spdk_tgt_config.json.PWY 00:19:35.510 + ret=0 00:19:35.510 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:19:35.768 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:19:36.027 + diff -u /tmp/62.zTq /tmp/spdk_tgt_config.json.PWY 00:19:36.027 INFO: JSON config files are the same 00:19:36.027 + echo 'INFO: JSON config files are the same' 00:19:36.027 + rm /tmp/62.zTq /tmp/spdk_tgt_config.json.PWY 00:19:36.027 + exit 0 00:19:36.027 INFO: changing configuration and checking if this can be detected... 00:19:36.027 11:12:54 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:19:36.027 11:12:54 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:19:36.027 11:12:54 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:19:36.027 11:12:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:19:36.285 11:12:54 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:19:36.285 11:12:54 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:19:36.285 11:12:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:19:36.285 + '[' 2 -ne 2 ']' 00:19:36.285 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:19:36.285 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:19:36.285 + rootdir=/home/vagrant/spdk_repo/spdk 00:19:36.285 +++ basename /dev/fd/62 00:19:36.285 ++ mktemp /tmp/62.XXX 00:19:36.285 + tmp_file_1=/tmp/62.tlJ 00:19:36.285 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:19:36.285 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:19:36.285 + tmp_file_2=/tmp/spdk_tgt_config.json.4wQ 00:19:36.285 + ret=0 00:19:36.285 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:19:36.583 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:19:36.583 + diff -u /tmp/62.tlJ /tmp/spdk_tgt_config.json.4wQ 00:19:36.583 + ret=1 00:19:36.583 + echo '=== Start of file: /tmp/62.tlJ ===' 00:19:36.583 + cat /tmp/62.tlJ 00:19:36.583 + echo '=== End of file: /tmp/62.tlJ ===' 00:19:36.583 + echo '' 00:19:36.583 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4wQ ===' 00:19:36.583 + cat /tmp/spdk_tgt_config.json.4wQ 00:19:36.583 + echo '=== End of file: /tmp/spdk_tgt_config.json.4wQ ===' 00:19:36.583 + echo '' 00:19:36.583 + rm /tmp/62.tlJ /tmp/spdk_tgt_config.json.4wQ 00:19:36.583 + exit 1 00:19:36.583 INFO: configuration change detected. 00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
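The change-detection pass above only removes the canary bdev created earlier and re-runs the same diff; in RPC terms:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # save_config output now differs from spdk_tgt_config.json, so json_diff.sh exits 1
  # and the test reports 'INFO: configuration change detected.'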
00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:19:36.583 11:12:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:36.583 11:12:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@317 -- # [[ -n 47216 ]] 00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:19:36.583 11:12:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:36.583 11:12:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:19:36.583 11:12:55 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:19:36.583 11:12:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:19:36.840 11:12:55 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:19:36.840 11:12:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:19:37.098 11:12:55 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:19:37.098 11:12:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:19:37.356 11:12:55 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:19:37.356 11:12:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:19:37.615 11:12:56 json_config -- json_config/json_config.sh@193 -- # uname -s 00:19:37.615 11:12:56 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:19:37.615 11:12:56 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:19:37.615 11:12:56 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:19:37.615 11:12:56 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:37.615 11:12:56 json_config -- json_config/json_config.sh@323 -- # killprocess 47216 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@946 -- # '[' -z 47216 ']' 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@950 -- # kill -0 47216 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@951 -- # uname 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 47216 00:19:37.615 killing process with pid 47216 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47216' 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@965 -- # kill 47216 00:19:37.615 11:12:56 json_config -- common/autotest_common.sh@970 -- # wait 47216 00:19:38.549 11:12:57 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:19:38.549 11:12:57 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:19:38.549 11:12:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.549 11:12:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:38.549 INFO: Success 00:19:38.549 11:12:57 json_config -- json_config/json_config.sh@328 -- # return 0 00:19:38.549 11:12:57 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:19:38.549 00:19:38.549 real 0m13.138s 00:19:38.549 user 0m18.684s 00:19:38.549 sys 0m2.191s 00:19:38.549 ************************************ 00:19:38.549 END TEST json_config 00:19:38.549 ************************************ 00:19:38.549 11:12:57 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:38.549 11:12:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:38.807 11:12:57 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:19:38.807 11:12:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:38.807 11:12:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:38.807 11:12:57 -- common/autotest_common.sh@10 -- # set +x 00:19:38.807 ************************************ 00:19:38.807 START TEST json_config_extra_key 00:19:38.807 ************************************ 00:19:38.807 11:12:57 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:19:38.807 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e366695e-fdac-4308-a357-68aa2793fc57 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e366695e-fdac-4308-a357-68aa2793fc57 
00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.807 11:12:57 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.807 11:12:57 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.807 11:12:57 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.807 11:12:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:38.807 11:12:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:38.807 11:12:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:38.807 11:12:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:19:38.807 11:12:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:38.807 11:12:57 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:38.807 INFO: launching applications... 00:19:38.807 Waiting for target to run... 
00:19:38.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:19:38.807 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:19:38.807 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=([target]="") 00:19:38.807 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:19:38.807 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:19:38.807 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:19:38.807 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=([target]='-m 0x1 -s 1024') 00:19:38.807 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:19:38.807 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:19:38.808 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:19:38.808 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:19:38.808 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:19:38.808 11:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=47417 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 47417 /var/tmp/spdk_tgt.sock 00:19:38.808 11:12:57 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 47417 ']' 00:19:38.808 11:12:57 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:19:38.808 11:12:57 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:38.808 11:12:57 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
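json_config_extra_key repeats the start/stop cycle but boots directly from a checked-in config rather than one captured at runtime; the launch the trace performs next is equivalent to:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./test/json_config/extra_key.json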
00:19:38.808 11:12:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:19:38.808 11:12:57 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:38.808 11:12:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:19:39.065 [2024-05-15 11:12:57.446517] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:19:39.065 [2024-05-15 11:12:57.446696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47417 ] 00:19:39.323 [2024-05-15 11:12:57.864293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.582 [2024-05-15 11:12:58.056184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.179 00:19:40.179 INFO: shutting down applications... 00:19:40.179 11:12:58 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:40.179 11:12:58 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:19:40.179 11:12:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:19:40.179 11:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:19:40.179 11:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:19:40.179 11:12:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:19:40.179 11:12:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:19:40.179 11:12:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 47417 ]] 00:19:40.179 11:12:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 47417 00:19:40.179 11:12:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:19:40.179 11:12:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:40.179 11:12:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47417 00:19:40.179 11:12:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:40.744 11:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:40.744 11:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:40.744 11:12:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47417 00:19:40.744 11:12:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:41.309 11:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:41.309 11:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:41.309 11:12:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47417 00:19:41.309 11:12:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:41.875 11:13:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:41.875 11:13:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:41.875 11:13:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47417 00:19:41.875 11:13:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:42.132 11:13:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:42.132 11:13:00 
json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:42.132 11:13:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47417 00:19:42.132 11:13:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:42.698 SPDK target shutdown done 00:19:42.698 Success 00:19:42.698 11:13:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:42.698 11:13:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:42.698 11:13:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47417 00:19:42.698 11:13:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:19:42.698 11:13:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:19:42.698 11:13:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:19:42.698 11:13:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:19:42.698 11:13:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:19:42.698 00:19:42.698 real 0m3.991s 00:19:42.698 user 0m3.760s 00:19:42.698 sys 0m0.524s 00:19:42.698 11:13:01 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:42.698 11:13:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:19:42.698 ************************************ 00:19:42.698 END TEST json_config_extra_key 00:19:42.698 ************************************ 00:19:42.698 11:13:01 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:19:42.698 11:13:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:42.698 11:13:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:42.698 11:13:01 -- common/autotest_common.sh@10 -- # set +x 00:19:42.698 ************************************ 00:19:42.698 START TEST alias_rpc 00:19:42.698 ************************************ 00:19:42.698 11:13:01 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:19:42.956 * Looking for test storage... 00:19:42.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:19:42.956 11:13:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:19:42.956 11:13:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=47529 00:19:42.956 11:13:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 47529 00:19:42.956 11:13:01 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 47529 ']' 00:19:42.956 11:13:01 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.956 11:13:01 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:42.956 11:13:01 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.956 11:13:01 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:42.956 11:13:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:42.956 11:13:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.956 [2024-05-15 11:13:01.483913] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
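The json_config_extra_key shutdown traced above is a simple polling loop: send SIGINT to the target, then re-check it with kill -0 every 0.5 s for at most 30 iterations. A simplified sketch of that pattern (example PID taken from this run; not the actual json_config/common.sh):

  pid=47417                                  # example PID from the run above
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break    # stop once the target has exited
      sleep 0.5
  done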
00:19:42.956 [2024-05-15 11:13:01.484093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47529 ] 00:19:43.214 [2024-05-15 11:13:01.644001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.471 [2024-05-15 11:13:01.863629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:19:44.401 11:13:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:19:44.401 11:13:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 47529 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 47529 ']' 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 47529 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 47529 00:19:44.401 killing process with pid 47529 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47529' 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@965 -- # kill 47529 00:19:44.401 11:13:02 alias_rpc -- common/autotest_common.sh@970 -- # wait 47529 00:19:46.934 ************************************ 00:19:46.934 END TEST alias_rpc 00:19:46.934 ************************************ 00:19:46.934 00:19:46.934 real 0m3.884s 00:19:46.934 user 0m3.898s 00:19:46.934 sys 0m0.483s 00:19:46.934 11:13:05 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:46.934 11:13:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.934 11:13:05 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:19:46.934 11:13:05 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:19:46.934 11:13:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:46.934 11:13:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:46.934 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:19:46.934 ************************************ 00:19:46.934 START TEST spdkcli_tcp 00:19:46.934 ************************************ 00:19:46.934 11:13:05 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:19:46.934 * Looking for test storage... 
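The alias_rpc teardown above only kills PID 47529 after confirming via ps that it still names an SPDK reactor process. A simplified, hedged version of that guard (not the real killprocess() from autotest_common.sh):

  killprocess() {
      local pid=$1 name
      name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
      if [[ $name == reactor_* ]]; then
          echo "killing process with pid $pid"
          kill "$pid" && wait "$pid" 2>/dev/null
      fi
  }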
00:19:46.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:19:46.934 11:13:05 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:46.934 11:13:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:46.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=47652 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 47652 00:19:46.934 11:13:05 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 47652 ']' 00:19:46.934 11:13:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.934 11:13:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:46.934 11:13:05 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:46.934 11:13:05 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.934 11:13:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:46.934 11:13:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:46.934 [2024-05-15 11:13:05.418182] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:19:46.934 [2024-05-15 11:13:05.418361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47652 ] 00:19:47.193 [2024-05-15 11:13:05.596420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:47.193 [2024-05-15 11:13:05.808999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.193 [2024-05-15 11:13:05.809007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.125 11:13:06 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:48.125 11:13:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:19:48.125 11:13:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=47674 00:19:48.125 11:13:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:19:48.125 11:13:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:19:48.383 [ 00:19:48.383 "spdk_get_version", 00:19:48.383 "rpc_get_methods", 00:19:48.383 "keyring_get_keys", 00:19:48.383 "trace_get_info", 00:19:48.383 "trace_get_tpoint_group_mask", 00:19:48.383 "trace_disable_tpoint_group", 00:19:48.383 "trace_enable_tpoint_group", 00:19:48.383 "trace_clear_tpoint_mask", 00:19:48.383 "trace_set_tpoint_mask", 00:19:48.383 "framework_get_pci_devices", 00:19:48.383 "framework_get_config", 00:19:48.383 "framework_get_subsystems", 00:19:48.384 "iobuf_get_stats", 00:19:48.384 "iobuf_set_options", 00:19:48.384 "sock_get_default_impl", 00:19:48.384 "sock_set_default_impl", 00:19:48.384 "sock_impl_set_options", 00:19:48.384 "sock_impl_get_options", 00:19:48.384 "vmd_rescan", 00:19:48.384 "vmd_remove_device", 00:19:48.384 "vmd_enable", 00:19:48.384 "accel_get_stats", 00:19:48.384 "accel_set_options", 00:19:48.384 "accel_set_driver", 00:19:48.384 "accel_crypto_key_destroy", 00:19:48.384 "accel_crypto_keys_get", 00:19:48.384 "accel_crypto_key_create", 00:19:48.384 "accel_assign_opc", 00:19:48.384 "accel_get_module_info", 00:19:48.384 "accel_get_opc_assignments", 00:19:48.384 "notify_get_notifications", 00:19:48.384 "notify_get_types", 00:19:48.384 "bdev_get_histogram", 00:19:48.384 "bdev_enable_histogram", 00:19:48.384 "bdev_set_qos_limit", 00:19:48.384 "bdev_set_qd_sampling_period", 00:19:48.384 "bdev_get_bdevs", 00:19:48.384 "bdev_reset_iostat", 00:19:48.384 "bdev_get_iostat", 00:19:48.384 "bdev_examine", 00:19:48.384 "bdev_wait_for_examine", 00:19:48.384 "bdev_set_options", 00:19:48.384 "scsi_get_devices", 00:19:48.384 "thread_set_cpumask", 00:19:48.384 "framework_get_scheduler", 00:19:48.384 "framework_set_scheduler", 00:19:48.384 "framework_get_reactors", 00:19:48.384 "thread_get_io_channels", 00:19:48.384 "thread_get_pollers", 00:19:48.384 "thread_get_stats", 00:19:48.384 "framework_monitor_context_switch", 00:19:48.384 "spdk_kill_instance", 00:19:48.384 "log_enable_timestamps", 00:19:48.384 "log_get_flags", 00:19:48.384 "log_clear_flag", 00:19:48.384 "log_set_flag", 00:19:48.384 "log_get_level", 00:19:48.384 "log_set_level", 00:19:48.384 "log_get_print_level", 00:19:48.384 "log_set_print_level", 00:19:48.384 "framework_enable_cpumask_locks", 00:19:48.384 "framework_disable_cpumask_locks", 00:19:48.384 "framework_wait_init", 00:19:48.384 "framework_start_init", 00:19:48.384 "virtio_blk_create_transport", 00:19:48.384 "virtio_blk_get_transports", 00:19:48.384 
"vhost_controller_set_coalescing", 00:19:48.384 "vhost_get_controllers", 00:19:48.384 "vhost_delete_controller", 00:19:48.384 "vhost_create_blk_controller", 00:19:48.384 "vhost_scsi_controller_remove_target", 00:19:48.384 "vhost_scsi_controller_add_target", 00:19:48.384 "vhost_start_scsi_controller", 00:19:48.384 "vhost_create_scsi_controller", 00:19:48.384 "nbd_get_disks", 00:19:48.384 "nbd_stop_disk", 00:19:48.384 "nbd_start_disk", 00:19:48.384 "env_dpdk_get_mem_stats", 00:19:48.384 "nvmf_subsystem_get_listeners", 00:19:48.384 "nvmf_subsystem_get_qpairs", 00:19:48.384 "nvmf_subsystem_get_controllers", 00:19:48.384 "nvmf_get_stats", 00:19:48.384 "nvmf_get_transports", 00:19:48.384 "nvmf_create_transport", 00:19:48.384 "nvmf_get_targets", 00:19:48.384 "nvmf_delete_target", 00:19:48.384 "nvmf_create_target", 00:19:48.384 "nvmf_subsystem_allow_any_host", 00:19:48.384 "nvmf_subsystem_remove_host", 00:19:48.384 "nvmf_subsystem_add_host", 00:19:48.384 "nvmf_ns_remove_host", 00:19:48.384 "nvmf_ns_add_host", 00:19:48.384 "nvmf_subsystem_remove_ns", 00:19:48.384 "nvmf_subsystem_add_ns", 00:19:48.384 "nvmf_subsystem_listener_set_ana_state", 00:19:48.384 "nvmf_discovery_get_referrals", 00:19:48.384 "nvmf_discovery_remove_referral", 00:19:48.384 "nvmf_discovery_add_referral", 00:19:48.384 "nvmf_subsystem_remove_listener", 00:19:48.384 "nvmf_subsystem_add_listener", 00:19:48.384 "nvmf_delete_subsystem", 00:19:48.384 "nvmf_create_subsystem", 00:19:48.384 "nvmf_get_subsystems", 00:19:48.384 "nvmf_set_crdt", 00:19:48.384 "nvmf_set_config", 00:19:48.384 "nvmf_set_max_subsystems", 00:19:48.384 "iscsi_get_histogram", 00:19:48.384 "iscsi_enable_histogram", 00:19:48.384 "iscsi_set_options", 00:19:48.384 "iscsi_get_auth_groups", 00:19:48.384 "iscsi_auth_group_remove_secret", 00:19:48.384 "iscsi_auth_group_add_secret", 00:19:48.384 "iscsi_delete_auth_group", 00:19:48.384 "iscsi_create_auth_group", 00:19:48.384 "iscsi_set_discovery_auth", 00:19:48.384 "iscsi_get_options", 00:19:48.384 "iscsi_target_node_request_logout", 00:19:48.384 "iscsi_target_node_set_redirect", 00:19:48.384 "iscsi_target_node_set_auth", 00:19:48.384 "iscsi_target_node_add_lun", 00:19:48.384 "iscsi_get_stats", 00:19:48.384 "iscsi_get_connections", 00:19:48.384 "iscsi_portal_group_set_auth", 00:19:48.384 "iscsi_start_portal_group", 00:19:48.384 "iscsi_delete_portal_group", 00:19:48.384 "iscsi_create_portal_group", 00:19:48.384 "iscsi_get_portal_groups", 00:19:48.384 "iscsi_delete_target_node", 00:19:48.384 "iscsi_target_node_remove_pg_ig_maps", 00:19:48.384 "iscsi_target_node_add_pg_ig_maps", 00:19:48.384 "iscsi_create_target_node", 00:19:48.384 "iscsi_get_target_nodes", 00:19:48.384 "iscsi_delete_initiator_group", 00:19:48.384 "iscsi_initiator_group_remove_initiators", 00:19:48.384 "iscsi_initiator_group_add_initiators", 00:19:48.384 "iscsi_create_initiator_group", 00:19:48.384 "iscsi_get_initiator_groups", 00:19:48.384 "keyring_file_remove_key", 00:19:48.384 "keyring_file_add_key", 00:19:48.384 "iaa_scan_accel_module", 00:19:48.384 "dsa_scan_accel_module", 00:19:48.384 "ioat_scan_accel_module", 00:19:48.384 "accel_error_inject_error", 00:19:48.384 "bdev_daos_resize", 00:19:48.384 "bdev_daos_delete", 00:19:48.384 "bdev_daos_create", 00:19:48.384 "bdev_virtio_attach_controller", 00:19:48.384 "bdev_virtio_scsi_get_devices", 00:19:48.384 "bdev_virtio_detach_controller", 00:19:48.384 "bdev_virtio_blk_set_hotplug", 00:19:48.384 "bdev_ftl_set_property", 00:19:48.384 "bdev_ftl_get_properties", 00:19:48.384 "bdev_ftl_get_stats", 00:19:48.384 
"bdev_ftl_unmap", 00:19:48.384 "bdev_ftl_unload", 00:19:48.384 "bdev_ftl_delete", 00:19:48.384 "bdev_ftl_load", 00:19:48.384 "bdev_ftl_create", 00:19:48.384 "bdev_aio_delete", 00:19:48.384 "bdev_aio_rescan", 00:19:48.384 "bdev_aio_create", 00:19:48.384 "blobfs_create", 00:19:48.384 "blobfs_detect", 00:19:48.384 "blobfs_set_cache_size", 00:19:48.384 "bdev_zone_block_delete", 00:19:48.384 "bdev_zone_block_create", 00:19:48.384 "bdev_delay_delete", 00:19:48.384 "bdev_delay_create", 00:19:48.384 "bdev_delay_update_latency", 00:19:48.384 "bdev_split_delete", 00:19:48.384 "bdev_split_create", 00:19:48.384 "bdev_error_inject_error", 00:19:48.384 "bdev_error_delete", 00:19:48.384 "bdev_error_create", 00:19:48.384 "bdev_raid_set_options", 00:19:48.384 "bdev_raid_remove_base_bdev", 00:19:48.384 "bdev_raid_add_base_bdev", 00:19:48.384 "bdev_raid_delete", 00:19:48.384 "bdev_raid_create", 00:19:48.384 "bdev_raid_get_bdevs", 00:19:48.384 "bdev_lvol_check_shallow_copy", 00:19:48.384 "bdev_lvol_start_shallow_copy", 00:19:48.384 "bdev_lvol_grow_lvstore", 00:19:48.384 "bdev_lvol_get_lvols", 00:19:48.384 "bdev_lvol_get_lvstores", 00:19:48.384 "bdev_lvol_delete", 00:19:48.384 "bdev_lvol_set_read_only", 00:19:48.384 "bdev_lvol_resize", 00:19:48.384 "bdev_lvol_decouple_parent", 00:19:48.384 "bdev_lvol_inflate", 00:19:48.384 "bdev_lvol_rename", 00:19:48.384 "bdev_lvol_clone_bdev", 00:19:48.384 "bdev_lvol_clone", 00:19:48.384 "bdev_lvol_snapshot", 00:19:48.384 "bdev_lvol_create", 00:19:48.384 "bdev_lvol_delete_lvstore", 00:19:48.384 "bdev_lvol_rename_lvstore", 00:19:48.384 "bdev_lvol_create_lvstore", 00:19:48.384 "bdev_passthru_delete", 00:19:48.384 "bdev_passthru_create", 00:19:48.384 "bdev_nvme_cuse_unregister", 00:19:48.384 "bdev_nvme_cuse_register", 00:19:48.384 "bdev_opal_new_user", 00:19:48.384 "bdev_opal_set_lock_state", 00:19:48.384 "bdev_opal_delete", 00:19:48.384 "bdev_opal_get_info", 00:19:48.384 "bdev_opal_create", 00:19:48.384 "bdev_nvme_opal_revert", 00:19:48.384 "bdev_nvme_opal_init", 00:19:48.384 "bdev_nvme_send_cmd", 00:19:48.384 "bdev_nvme_get_path_iostat", 00:19:48.384 "bdev_nvme_get_mdns_discovery_info", 00:19:48.384 "bdev_nvme_stop_mdns_discovery", 00:19:48.384 "bdev_nvme_start_mdns_discovery", 00:19:48.384 "bdev_nvme_set_multipath_policy", 00:19:48.384 "bdev_nvme_set_preferred_path", 00:19:48.384 "bdev_nvme_get_io_paths", 00:19:48.384 "bdev_nvme_remove_error_injection", 00:19:48.384 "bdev_nvme_add_error_injection", 00:19:48.384 "bdev_nvme_get_discovery_info", 00:19:48.384 "bdev_nvme_stop_discovery", 00:19:48.384 "bdev_nvme_start_discovery", 00:19:48.384 "bdev_nvme_get_controller_health_info", 00:19:48.384 "bdev_nvme_disable_controller", 00:19:48.384 "bdev_nvme_enable_controller", 00:19:48.384 "bdev_nvme_reset_controller", 00:19:48.384 "bdev_nvme_get_transport_statistics", 00:19:48.384 "bdev_nvme_apply_firmware", 00:19:48.384 "bdev_nvme_detach_controller", 00:19:48.384 "bdev_nvme_get_controllers", 00:19:48.384 "bdev_nvme_attach_controller", 00:19:48.384 "bdev_nvme_set_hotplug", 00:19:48.384 "bdev_nvme_set_options", 00:19:48.384 "bdev_null_resize", 00:19:48.384 "bdev_null_delete", 00:19:48.384 "bdev_null_create", 00:19:48.384 "bdev_malloc_delete", 00:19:48.384 "bdev_malloc_create" 00:19:48.384 ] 00:19:48.384 11:13:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:19:48.384 11:13:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:48.385 11:13:06 spdkcli_tcp -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:48.385 11:13:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 47652 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 47652 ']' 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 47652 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 47652 00:19:48.385 killing process with pid 47652 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47652' 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 47652 00:19:48.385 11:13:06 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 47652 00:19:50.912 ************************************ 00:19:50.912 END TEST spdkcli_tcp 00:19:50.912 ************************************ 00:19:50.912 00:19:50.912 real 0m3.969s 00:19:50.912 user 0m6.906s 00:19:50.912 sys 0m0.568s 00:19:50.912 11:13:09 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:50.912 11:13:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:50.912 11:13:09 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:19:50.912 11:13:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:50.912 11:13:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:50.912 11:13:09 -- common/autotest_common.sh@10 -- # set +x 00:19:50.912 ************************************ 00:19:50.912 START TEST dpdk_mem_utility 00:19:50.912 ************************************ 00:19:50.912 11:13:09 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:19:50.912 * Looking for test storage... 00:19:50.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:19:50.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.912 11:13:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:19:50.912 11:13:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=47788 00:19:50.912 11:13:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 47788 00:19:50.912 11:13:09 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 47788 ']' 00:19:50.912 11:13:09 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.912 11:13:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.912 11:13:09 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:50.912 11:13:09 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
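The spdkcli_tcp run above bridges the target's UNIX-domain RPC socket to TCP so rpc.py can connect via 127.0.0.1:9998. A sketch of that bridge, using the same illustrative port and socket path as the log:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"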
00:19:50.912 11:13:09 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:50.912 11:13:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:19:50.912 [2024-05-15 11:13:09.422961] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:19:50.912 [2024-05-15 11:13:09.423140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47788 ] 00:19:51.170 [2024-05-15 11:13:09.588330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.428 [2024-05-15 11:13:09.808601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.993 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:51.993 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:19:51.993 11:13:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:19:51.993 11:13:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:19:51.993 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.993 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:19:51.993 { 00:19:51.993 "filename": "/tmp/spdk_mem_dump.txt" 00:19:51.993 } 00:19:51.993 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.993 11:13:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:19:52.252 DPDK memory size 868.000000 MiB in 1 heap(s) 00:19:52.252 1 heaps totaling size 868.000000 MiB 00:19:52.252 size: 868.000000 MiB heap id: 0 00:19:52.252 end heaps---------- 00:19:52.252 8 mempools totaling size 646.224487 MiB 00:19:52.252 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:19:52.252 size: 158.602051 MiB name: PDU_data_out_Pool 00:19:52.252 size: 132.629456 MiB name: bdev_io_47788 00:19:52.252 size: 51.011292 MiB name: evtpool_47788 00:19:52.252 size: 50.003479 MiB name: msgpool_47788 00:19:52.252 size: 21.763794 MiB name: PDU_Pool 00:19:52.252 size: 19.513306 MiB name: SCSI_TASK_Pool 00:19:52.252 size: 0.026123 MiB name: Session_Pool 00:19:52.252 end mempools------- 00:19:52.252 6 memzones totaling size 4.142822 MiB 00:19:52.252 size: 1.000366 MiB name: RG_ring_0_47788 00:19:52.252 size: 1.000366 MiB name: RG_ring_1_47788 00:19:52.252 size: 1.000366 MiB name: RG_ring_4_47788 00:19:52.252 size: 1.000366 MiB name: RG_ring_5_47788 00:19:52.252 size: 0.125366 MiB name: RG_ring_2_47788 00:19:52.252 size: 0.015991 MiB name: RG_ring_3_47788 00:19:52.252 end memzones------- 00:19:52.252 11:13:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:19:52.252 heap id: 0 total size: 868.000000 MiB number of busy elements: 275 number of free elements: 18 00:19:52.252 list of free elements. 
size: 18.349243 MiB 00:19:52.252 element at address: 0x200000400000 with size: 1.999451 MiB 00:19:52.252 element at address: 0x200000800000 with size: 1.996887 MiB 00:19:52.252 element at address: 0x200007000000 with size: 1.995972 MiB 00:19:52.252 element at address: 0x20000b200000 with size: 1.995972 MiB 00:19:52.252 element at address: 0x20001c100040 with size: 0.999939 MiB 00:19:52.252 element at address: 0x20001c500040 with size: 0.999939 MiB 00:19:52.252 element at address: 0x20001c600000 with size: 0.999084 MiB 00:19:52.252 element at address: 0x200003e00000 with size: 0.996094 MiB 00:19:52.252 element at address: 0x200035200000 with size: 0.994324 MiB 00:19:52.252 element at address: 0x20001be00000 with size: 0.959656 MiB 00:19:52.252 element at address: 0x20001c900040 with size: 0.936401 MiB 00:19:52.252 element at address: 0x200000200000 with size: 0.831421 MiB 00:19:52.252 element at address: 0x20001e000000 with size: 0.563171 MiB 00:19:52.252 element at address: 0x20001c200000 with size: 0.487976 MiB 00:19:52.252 element at address: 0x20001ca00000 with size: 0.485413 MiB 00:19:52.252 element at address: 0x20002b400000 with size: 0.397766 MiB 00:19:52.252 element at address: 0x200013800000 with size: 0.360229 MiB 00:19:52.252 element at address: 0x200003a00000 with size: 0.349548 MiB 00:19:52.252 list of standard malloc elements. size: 199.277954 MiB 00:19:52.252 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:19:52.252 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:19:52.252 element at address: 0x20001bffff80 with size: 1.000183 MiB 00:19:52.252 element at address: 0x20001c3fff80 with size: 1.000183 MiB 00:19:52.252 element at address: 0x20001c7fff80 with size: 1.000183 MiB 00:19:52.252 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:19:52.252 element at address: 0x20001c9eff40 with size: 0.062683 MiB 00:19:52.252 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:19:52.252 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:19:52.252 element at address: 0x20001c9efdc0 with size: 0.000366 MiB 00:19:52.252 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:19:52.252 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d5f80 with size: 0.000244 MiB 
00:19:52.252 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:19:52.252 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a597c0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a598c0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a599c0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a59ac0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a59bc0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a59cc0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a59dc0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a59ec0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a59fc0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a5a0c0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:19:52.252 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:19:52.253 element at 
address: 0x200003a5adc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003aff980 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003affa80 with size: 0.000244 MiB 00:19:52.253 element at address: 0x200003eff000 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001385c380 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001385c480 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001385c580 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001385c680 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001385c780 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001385c880 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001385c980 with size: 0.000244 MiB 00:19:52.253 element at address: 0x2000138dccc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001befdd00 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27cec0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27cfc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d0c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d1c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d2c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d3c0 
with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d4c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d5c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d6c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d7c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d8c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c27d9c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c2fdd00 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c6ffc40 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c9efbc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001c9efcc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001cabc680 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0902c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0903c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0904c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0905c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0906c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0907c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0908c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0909c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e090ac0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e090bc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e090cc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e090dc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e090ec0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e090fc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0910c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0911c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0912c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0913c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0914c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0915c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0916c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0917c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0918c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0919c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e091ac0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e091bc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e091cc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e091dc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e091ec0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e091fc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0920c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0921c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0922c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0923c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0924c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0925c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0926c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0927c0 with size: 0.000244 MiB 
00:19:52.253 element at address: 0x20001e0928c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0929c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e092ac0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e092bc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e092cc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e092dc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e092ec0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e092fc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0930c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0931c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0932c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0933c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0934c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0935c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0936c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0937c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0938c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0939c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e093ac0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e093bc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e093cc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e093dc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e093ec0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e093fc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0940c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0941c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0942c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0943c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0944c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0945c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0946c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0947c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0948c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0949c0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e094ac0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e094bc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e094cc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e094dc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e094ec0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e094fc0 with size: 0.000244 MiB 00:19:52.253 element at address: 0x20001e0950c0 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20001e0951c0 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20001e0952c0 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20001e0953c0 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b465d40 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b465e40 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46cb00 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46cd80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46ce80 with size: 0.000244 MiB 00:19:52.254 element at 
address: 0x20002b46cf80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d080 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d180 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d280 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d380 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d480 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d580 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d680 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d780 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d880 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46d980 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46da80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46db80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46dc80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46dd80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46de80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46df80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e080 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e180 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e280 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e380 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e480 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e580 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e680 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e780 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e880 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46e980 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46ea80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46eb80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46ec80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46ed80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46ee80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46ef80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f080 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f180 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f280 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f380 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f480 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f580 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f680 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f780 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f880 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46f980 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46fa80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46fb80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46fc80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46fd80 with size: 0.000244 MiB 00:19:52.254 element at address: 0x20002b46fe80 with size: 0.000244 MiB 00:19:52.254 list of memzone associated elements. 
size: 650.372803 MiB 00:19:52.254 element at address: 0x20001e0954c0 with size: 211.416809 MiB 00:19:52.254 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:19:52.254 element at address: 0x20002b46ff80 with size: 157.562622 MiB 00:19:52.254 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:19:52.254 element at address: 0x2000139def40 with size: 132.129089 MiB 00:19:52.254 associated memzone info: size: 132.128906 MiB name: MP_bdev_io_47788_0 00:19:52.254 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:19:52.254 associated memzone info: size: 48.002930 MiB name: MP_evtpool_47788_0 00:19:52.254 element at address: 0x200003fff340 with size: 48.003113 MiB 00:19:52.254 associated memzone info: size: 48.002930 MiB name: MP_msgpool_47788_0 00:19:52.254 element at address: 0x20001cbbe900 with size: 20.255615 MiB 00:19:52.254 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:19:52.254 element at address: 0x2000353feb00 with size: 18.005127 MiB 00:19:52.254 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:19:52.254 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:19:52.254 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_47788 00:19:52.254 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:19:52.254 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_47788 00:19:52.254 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:19:52.254 associated memzone info: size: 1.007996 MiB name: MP_evtpool_47788 00:19:52.254 element at address: 0x20001c2fde00 with size: 1.008179 MiB 00:19:52.254 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:19:52.254 element at address: 0x20001cabc780 with size: 1.008179 MiB 00:19:52.254 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:19:52.254 element at address: 0x20001befde00 with size: 1.008179 MiB 00:19:52.254 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:19:52.254 element at address: 0x2000138dcdc0 with size: 1.008179 MiB 00:19:52.254 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:19:52.254 element at address: 0x200003eff100 with size: 1.000549 MiB 00:19:52.254 associated memzone info: size: 1.000366 MiB name: RG_ring_0_47788 00:19:52.254 element at address: 0x200003affb80 with size: 1.000549 MiB 00:19:52.254 associated memzone info: size: 1.000366 MiB name: RG_ring_1_47788 00:19:52.254 element at address: 0x20001c6ffd40 with size: 1.000549 MiB 00:19:52.254 associated memzone info: size: 1.000366 MiB name: RG_ring_4_47788 00:19:52.254 element at address: 0x2000352fe8c0 with size: 1.000549 MiB 00:19:52.254 associated memzone info: size: 1.000366 MiB name: RG_ring_5_47788 00:19:52.254 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:19:52.254 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_47788 00:19:52.254 element at address: 0x20001c27dac0 with size: 0.500549 MiB 00:19:52.254 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:19:52.254 element at address: 0x20001385ca80 with size: 0.500549 MiB 00:19:52.254 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:19:52.254 element at address: 0x20001ca7c440 with size: 0.250549 MiB 00:19:52.254 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:19:52.254 element at address: 0x200003adf740 with size: 0.125549 MiB 00:19:52.254 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_47788 00:19:52.254 element at address: 0x20001bef5ac0 with size: 0.031799 MiB 00:19:52.254 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:19:52.254 element at address: 0x20002b465f40 with size: 0.023804 MiB 00:19:52.254 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:19:52.254 element at address: 0x200003adb500 with size: 0.016174 MiB 00:19:52.254 associated memzone info: size: 0.015991 MiB name: RG_ring_3_47788 00:19:52.254 element at address: 0x20002b46c0c0 with size: 0.002502 MiB 00:19:52.254 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:19:52.254 element at address: 0x2000002d6780 with size: 0.000366 MiB 00:19:52.254 associated memzone info: size: 0.000183 MiB name: MP_msgpool_47788 00:19:52.254 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:19:52.254 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_47788 00:19:52.254 element at address: 0x20002b46cc00 with size: 0.000366 MiB 00:19:52.254 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:19:52.254 11:13:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:19:52.254 11:13:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 47788 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 47788 ']' 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 47788 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 47788 00:19:52.254 killing process with pid 47788 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47788' 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 47788 00:19:52.254 11:13:10 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 47788 00:19:54.793 ************************************ 00:19:54.793 END TEST dpdk_mem_utility 00:19:54.793 ************************************ 00:19:54.793 00:19:54.793 real 0m3.815s 00:19:54.793 user 0m3.688s 00:19:54.793 sys 0m0.505s 00:19:54.793 11:13:13 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:54.793 11:13:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:19:54.793 11:13:13 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:19:54.793 11:13:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:54.793 11:13:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:54.793 11:13:13 -- common/autotest_common.sh@10 -- # set +x 00:19:54.793 ************************************ 00:19:54.793 START TEST event 00:19:54.793 ************************************ 00:19:54.793 11:13:13 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:19:54.793 * Looking for test storage... 
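The dpdk_mem_utility test above drives two tools: the env_dpdk_get_mem_stats RPC dumps DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py summarizes that dump (a plain run for heap/mempool/memzone totals, -m 0 for per-element detail of heap 0). A sketch of the same flow, assuming a running spdk_tgt:

  ./scripts/rpc.py env_dpdk_get_mem_stats      # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                   # heap / mempool / memzone summary
  ./scripts/dpdk_mem_info.py -m 0              # element-level detail for heap id 0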
00:19:54.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:19:54.793 11:13:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:54.793 11:13:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:19:54.793 11:13:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:19:54.793 11:13:13 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:19:54.793 11:13:13 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:54.793 11:13:13 event -- common/autotest_common.sh@10 -- # set +x 00:19:54.793 ************************************ 00:19:54.793 START TEST event_perf 00:19:54.793 ************************************ 00:19:54.793 11:13:13 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:19:54.793 Running I/O for 1 seconds...[2024-05-15 11:13:13.186912] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:19:54.793 [2024-05-15 11:13:13.187134] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47911 ] 00:19:54.793 [2024-05-15 11:13:13.351648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.052 [2024-05-15 11:13:13.589516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.052 [2024-05-15 11:13:13.589629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.052 [2024-05-15 11:13:13.589691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.052 [2024-05-15 11:13:13.589912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.425 Running I/O for 1 seconds... 00:19:56.425 lcore 0: 292197 00:19:56.425 lcore 1: 292193 00:19:56.425 lcore 2: 292193 00:19:56.425 lcore 3: 292195 00:19:56.425 done. 00:19:56.425 ************************************ 00:19:56.425 END TEST event_perf 00:19:56.425 ************************************ 00:19:56.425 00:19:56.425 real 0m1.800s 00:19:56.425 user 0m4.581s 00:19:56.425 sys 0m0.112s 00:19:56.425 11:13:14 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:56.425 11:13:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:19:56.425 11:13:14 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:19:56.425 11:13:14 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:56.425 11:13:14 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:56.425 11:13:14 event -- common/autotest_common.sh@10 -- # set +x 00:19:56.425 ************************************ 00:19:56.425 START TEST event_reactor 00:19:56.425 ************************************ 00:19:56.425 11:13:14 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:19:56.425 [2024-05-15 11:13:15.036653] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:19:56.425 [2024-05-15 11:13:15.037058] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47964 ] 00:19:56.690 [2024-05-15 11:13:15.203969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.949 [2024-05-15 11:13:15.419053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.322 test_start 00:19:58.322 oneshot 00:19:58.322 tick 100 00:19:58.322 tick 100 00:19:58.322 tick 250 00:19:58.322 tick 100 00:19:58.322 tick 100 00:19:58.322 tick 100 00:19:58.322 tick 250 00:19:58.322 tick 500 00:19:58.322 tick 100 00:19:58.322 tick 100 00:19:58.322 tick 250 00:19:58.322 tick 100 00:19:58.322 tick 100 00:19:58.322 test_end 00:19:58.322 ************************************ 00:19:58.322 END TEST event_reactor 00:19:58.322 ************************************ 00:19:58.322 00:19:58.322 real 0m1.787s 00:19:58.322 user 0m1.581s 00:19:58.322 sys 0m0.106s 00:19:58.322 11:13:16 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:58.322 11:13:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:19:58.322 11:13:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:19:58.322 11:13:16 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:58.322 11:13:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:58.322 11:13:16 event -- common/autotest_common.sh@10 -- # set +x 00:19:58.322 ************************************ 00:19:58.322 START TEST event_reactor_perf 00:19:58.322 ************************************ 00:19:58.322 11:13:16 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:19:58.322 [2024-05-15 11:13:16.867713] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:19:58.322 [2024-05-15 11:13:16.868201] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48007 ] 00:19:58.580 [2024-05-15 11:13:17.042603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.839 [2024-05-15 11:13:17.294963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.213 test_start 00:20:00.213 test_end 00:20:00.213 Performance: 601222 events per second 00:20:00.213 ************************************ 00:20:00.213 END TEST event_reactor_perf 00:20:00.213 ************************************ 00:20:00.213 00:20:00.213 real 0m1.832s 00:20:00.213 user 0m1.621s 00:20:00.213 sys 0m0.110s 00:20:00.213 11:13:18 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:00.213 11:13:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:20:00.213 11:13:18 event -- event/event.sh@49 -- # uname -s 00:20:00.213 11:13:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:20:00.213 11:13:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:20:00.213 11:13:18 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:00.213 11:13:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:00.213 11:13:18 event -- common/autotest_common.sh@10 -- # set +x 00:20:00.213 ************************************ 00:20:00.213 START TEST event_scheduler 00:20:00.213 ************************************ 00:20:00.213 11:13:18 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:20:00.213 * Looking for test storage... 00:20:00.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:20:00.213 11:13:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:20:00.213 11:13:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=48096 00:20:00.213 11:13:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:20:00.213 11:13:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 48096 00:20:00.213 11:13:18 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 48096 ']' 00:20:00.213 11:13:18 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.213 11:13:18 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:00.213 11:13:18 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.213 11:13:18 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:00.213 11:13:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:00.213 11:13:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:20:00.471 [2024-05-15 11:13:18.933831] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:20:00.471 [2024-05-15 11:13:18.934018] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48096 ] 00:20:00.471 [2024-05-15 11:13:19.097123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.037 [2024-05-15 11:13:19.395418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.037 [2024-05-15 11:13:19.395590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.037 [2024-05-15 11:13:19.395717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.037 [2024-05-15 11:13:19.395841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.295 11:13:19 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:01.295 11:13:19 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:20:01.295 11:13:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:20:01.295 11:13:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.295 11:13:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:01.295 POWER: Env isn't set yet! 00:20:01.295 POWER: Attempting to initialise ACPI cpufreq power management... 00:20:01.295 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:01.295 POWER: Cannot set governor of lcore 0 to userspace 00:20:01.295 POWER: Attempting to initialise PSTAT power management... 00:20:01.295 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:01.295 POWER: Cannot set governor of lcore 0 to performance 00:20:01.295 POWER: Attempting to initialise AMD PSTATE power management... 00:20:01.295 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:01.295 POWER: Cannot set governor of lcore 0 to userspace 00:20:01.295 POWER: Attempting to initialise CPPC power management... 00:20:01.295 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:01.295 POWER: Cannot set governor of lcore 0 to userspace 00:20:01.295 POWER: Attempting to initialise VM power management... 00:20:01.295 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:20:01.295 POWER: Unable to set Power Management Environment for lcore 0 00:20:01.295 [2024-05-15 11:13:19.736540] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:20:01.295 [2024-05-15 11:13:19.736568] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:20:01.295 [2024-05-15 11:13:19.736598] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:20:01.295 11:13:19 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.295 11:13:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:20:01.295 11:13:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.295 11:13:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 [2024-05-15 11:13:20.099494] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
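The POWER notices above come from the dynamic scheduler probing the DPDK power governors: each backend tries to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor, this VM exposes no cpufreq driver, so every attempt fails and the test continues with the dpdk governor disabled. A minimal shell sketch for checking that precondition on a test host follows; it only reads sysfs and is not part of the SPDK scripts.

    # Report the cpufreq governor per CPU; an empty result reproduces the
    # "Cannot set governor of lcore 0" notices seen in this log.
    for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        if [ ! -e "$gov" ]; then
            echo "no cpufreq sysfs nodes: power management unavailable"
            break
        fi
        echo "$gov -> $(cat "$gov")"
    done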
00:20:01.552 11:13:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.552 11:13:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:20:01.552 11:13:20 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:01.552 11:13:20 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 ************************************ 00:20:01.552 START TEST scheduler_create_thread 00:20:01.552 ************************************ 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 2 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 3 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 4 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 5 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 6 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 7 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 8 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 9 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.552 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.810 10 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.810 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:20:01.811 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.811 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:01.811 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.811 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:20:01.811 11:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:20:01.811 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.811 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:02.377 ************************************ 00:20:02.377 END TEST scheduler_create_thread 00:20:02.377 ************************************ 00:20:02.377 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.377 00:20:02.377 real 0m0.595s 00:20:02.377 user 0m0.007s 00:20:02.377 sys 0m0.008s 00:20:02.377 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:02.377 11:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:02.377 11:13:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:20:02.377 11:13:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 48096 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 48096 ']' 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 48096 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48096 00:20:02.377 killing process with pid 48096 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48096' 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 48096 00:20:02.377 11:13:20 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 48096 00:20:02.635 [2024-05-15 11:13:21.189985] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:20:04.010 00:20:04.010 real 0m3.721s 00:20:04.010 user 0m6.352s 00:20:04.010 sys 0m0.439s 00:20:04.010 11:13:22 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:04.010 ************************************ 00:20:04.010 END TEST event_scheduler 00:20:04.010 ************************************ 00:20:04.010 11:13:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:04.010 11:13:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:20:04.010 modprobe: FATAL: Module nbd not found. 
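The scheduler_create_thread test above drives the test application entirely through its RPC plugin: threads are created with a name (-n), an optional pinned cpumask (-m) and an activity percentage (-a), one thread is re-weighted with scheduler_thread_set_active, and one is removed with scheduler_thread_delete. A minimal sketch of the same sequence issued by hand, assuming the scheduler test app is still listening on /var/tmp/spdk.sock and that scripts/rpc.py is used in place of the suite's rpc_cmd wrapper (the plugin module must be on PYTHONPATH):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
    # fully active thread pinned to core 0, idle thread pinned to core 1
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $RPC scheduler_thread_create -n idle_pinned   -m 0x2 -a 0
    # unpinned thread at 30% activity; retune it to 50%, then delete it
    tid=$($RPC scheduler_thread_create -n one_third_active -a 30)
    $RPC scheduler_thread_set_active "$tid" 50
    $RPC scheduler_thread_delete "$tid"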
00:20:04.010 11:13:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:20:04.010 11:13:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:20:04.010 11:13:22 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:04.010 11:13:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:04.010 11:13:22 event -- common/autotest_common.sh@10 -- # set +x 00:20:04.010 ************************************ 00:20:04.010 START TEST cpu_locks 00:20:04.010 ************************************ 00:20:04.010 11:13:22 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:20:04.010 * Looking for test storage... 00:20:04.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:20:04.010 11:13:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:20:04.010 11:13:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:20:04.010 11:13:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:20:04.010 11:13:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:20:04.010 11:13:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:04.010 11:13:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:04.010 11:13:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:04.010 ************************************ 00:20:04.010 START TEST default_locks 00:20:04.010 ************************************ 00:20:04.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=48240 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 48240 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 48240 ']' 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:04.010 11:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:20:04.306 [2024-05-15 11:13:22.728209] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:20:04.306 [2024-05-15 11:13:22.728424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48240 ] 00:20:04.306 [2024-05-15 11:13:22.888966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.583 [2024-05-15 11:13:23.122711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.517 11:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:05.517 11:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:20:05.517 11:13:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 48240 00:20:05.517 11:13:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 48240 00:20:05.517 11:13:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 48240 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 48240 ']' 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 48240 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48240 00:20:06.460 killing process with pid 48240 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48240' 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 48240 00:20:06.460 11:13:24 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 48240 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 48240 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 48240 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:20:08.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
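default_locks above checks the core-mask locking at the filesystem level: a started spdk_tgt creates one /var/tmp/spdk_cpu_lock* file per claimed core, and the suite's locks_exist helper asks lslocks whether the target pid still holds an advisory lock named spdk_cpu_lock. A stand-alone sketch of that check using the same commands as the trace (waits simplified; the suite uses waitforlisten rather than sleep):

    # succeed if <pid> holds a lock on an spdk_cpu_lock file
    has_core_lock() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

    ./build/bin/spdk_tgt -m 0x1 &
    tgt=$!
    sleep 2
    ls /var/tmp/spdk_cpu_lock*          # one lock file per claimed core
    has_core_lock "$tgt" && echo "core lock held by pid $tgt"
    kill "$tgt"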
00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 48240 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 48240 ']' 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:20:08.997 ERROR: process (pid: 48240) is no longer running 00:20:08.997 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (48240) - No such process 00:20:08.997 ************************************ 00:20:08.997 END TEST default_locks 00:20:08.997 ************************************ 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:20:08.997 11:13:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:20:08.997 00:20:08.997 real 0m4.436s 00:20:08.997 user 0m4.395s 00:20:08.997 sys 0m1.107s 00:20:08.998 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:08.998 11:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:20:08.998 11:13:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:20:08.998 11:13:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:08.998 11:13:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:08.998 11:13:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:08.998 ************************************ 00:20:08.998 START TEST default_locks_via_rpc 00:20:08.998 ************************************ 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=48327 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 48327 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 48327 ']' 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:08.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:08.998 11:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:08.998 [2024-05-15 11:13:27.200917] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:20:08.998 [2024-05-15 11:13:27.201099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48327 ] 00:20:08.998 [2024-05-15 11:13:27.359465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.998 [2024-05-15 11:13:27.576435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 48327 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:09.931 11:13:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 48327 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 48327 
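default_locks_via_rpc exercises the same mechanism over RPC instead of process startup flags: framework_disable_cpumask_locks releases the per-core lock files of a running target and framework_enable_cpumask_locks re-acquires them, which is what the no_locks and locks_exist checks above assert. A minimal sketch, assuming a target already running on the default /var/tmp/spdk.sock socket and scripts/rpc.py as the client:

    scripts/rpc.py framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock* 2>/dev/null || echo "no core locks held"   # expected after disable
    scripts/rpc.py framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock*                                            # lock files are back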
00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 48327 ']' 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 48327 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48327 00:20:10.865 killing process with pid 48327 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48327' 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 48327 00:20:10.865 11:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 48327 00:20:13.392 00:20:13.392 real 0m4.477s 00:20:13.392 user 0m4.512s 00:20:13.392 sys 0m1.090s 00:20:13.392 11:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:13.392 ************************************ 00:20:13.392 END TEST default_locks_via_rpc 00:20:13.392 ************************************ 00:20:13.392 11:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.392 11:13:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:20:13.392 11:13:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:13.392 11:13:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:13.392 11:13:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:13.392 ************************************ 00:20:13.392 START TEST non_locking_app_on_locked_coremask 00:20:13.392 ************************************ 00:20:13.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=48418 00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 48418 /var/tmp/spdk.sock 00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48418 ']' 00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:13.392 11:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:13.392 [2024-05-15 11:13:31.729848] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:20:13.392 [2024-05-15 11:13:31.730042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48418 ] 00:20:13.392 [2024-05-15 11:13:31.881421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.650 [2024-05-15 11:13:32.097465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=48443 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 48443 /var/tmp/spdk2.sock 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48443 ']' 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:14.583 11:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:14.583 [2024-05-15 11:13:33.062547] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:20:14.583 [2024-05-15 11:13:33.062750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48443 ] 00:20:14.842 [2024-05-15 11:13:33.225062] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
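non_locking_app_on_locked_coremask shows the supported way to share an already-claimed core: the second spdk_tgt is started with --disable-cpumask-locks and its own RPC socket (-r /var/tmp/spdk2.sock), so it skips lock acquisition and logs "CPU core locks deactivated" instead of exiting. A minimal sketch of that arrangement with the flags taken from the trace (waits simplified):

    ./build/bin/spdk_tgt -m 0x1 &                 # first target claims core 0
    sleep 2
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    sleep 2
    lslocks | grep spdk_cpu_lock                  # only the first target holds the lock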
00:20:14.842 [2024-05-15 11:13:33.225149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.135 [2024-05-15 11:13:33.666593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.035 11:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:17.035 11:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:20:17.035 11:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 48418 00:20:17.035 11:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 48418 00:20:17.035 11:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 48418 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48418 ']' 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 48418 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48418 00:20:18.934 killing process with pid 48418 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48418' 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 48418 00:20:18.934 11:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 48418 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 48443 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48443 ']' 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 48443 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48443 00:20:23.141 killing process with pid 48443 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48443' 00:20:23.141 11:13:41 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 48443 00:20:23.141 11:13:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 48443 00:20:25.673 ************************************ 00:20:25.673 END TEST non_locking_app_on_locked_coremask 00:20:25.673 ************************************ 00:20:25.673 00:20:25.673 real 0m12.468s 00:20:25.673 user 0m12.902s 00:20:25.673 sys 0m2.253s 00:20:25.673 11:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:25.673 11:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:25.673 11:13:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:20:25.673 11:13:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:25.673 11:13:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:25.673 11:13:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:25.673 ************************************ 00:20:25.673 START TEST locking_app_on_unlocked_coremask 00:20:25.673 ************************************ 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=48619 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 48619 /var/tmp/spdk.sock 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48619 ']' 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:25.673 11:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:25.673 [2024-05-15 11:13:44.246368] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:20:25.673 [2024-05-15 11:13:44.246549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48619 ] 00:20:25.932 [2024-05-15 11:13:44.400743] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:20:25.932 [2024-05-15 11:13:44.401219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.189 [2024-05-15 11:13:44.623743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=48640 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 48640 /var/tmp/spdk2.sock 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48640 ']' 00:20:27.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:27.124 11:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:20:27.124 [2024-05-15 11:13:45.611493] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:20:27.124 [2024-05-15 11:13:45.611721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48640 ] 00:20:27.383 [2024-05-15 11:13:45.780689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.641 [2024-05-15 11:13:46.210809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.543 11:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:29.543 11:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:20:29.543 11:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 48640 00:20:29.543 11:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:29.543 11:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 48640 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 48619 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48619 ']' 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 48619 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48619 00:20:31.444 killing process with pid 48619 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48619' 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 48619 00:20:31.444 11:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 48619 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 48640 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48640 ']' 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 48640 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48640 00:20:36.711 killing process with pid 48640 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48640' 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 48640 00:20:36.711 11:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 48640 00:20:38.611 00:20:38.611 real 0m12.977s 00:20:38.611 user 0m13.471s 00:20:38.611 sys 0m2.424s 00:20:38.611 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:38.611 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:38.611 ************************************ 00:20:38.611 END TEST locking_app_on_unlocked_coremask 00:20:38.611 ************************************ 00:20:38.611 11:13:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:20:38.611 11:13:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:38.611 11:13:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:38.611 11:13:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:38.612 ************************************ 00:20:38.612 START TEST locking_app_on_locked_coremask 00:20:38.612 ************************************ 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:20:38.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=48824 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 48824 /var/tmp/spdk.sock 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48824 ']' 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:38.612 11:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:38.870 [2024-05-15 11:13:57.269664] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:20:38.870 [2024-05-15 11:13:57.270092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48824 ] 00:20:38.870 [2024-05-15 11:13:57.423397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.128 [2024-05-15 11:13:57.679910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=48845 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 48845 /var/tmp/spdk2.sock 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 48845 /var/tmp/spdk2.sock 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:20:40.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 48845 /var/tmp/spdk2.sock 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48845 ']' 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:40.061 11:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:40.319 [2024-05-15 11:13:58.704178] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:20:40.319 [2024-05-15 11:13:58.704395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48845 ] 00:20:40.319 [2024-05-15 11:13:58.865901] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 48824 has claimed it. 00:20:40.319 [2024-05-15 11:13:58.865998] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:20:40.887 ERROR: process (pid: 48845) is no longer running 00:20:40.887 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (48845) - No such process 00:20:40.887 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:40.887 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:20:40.887 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:20:40.887 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:40.887 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:40.887 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:40.887 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 48824 00:20:40.887 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 48824 00:20:40.887 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 48824 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48824 ']' 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 48824 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48824 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:41.823 killing process with pid 48824 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48824' 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 48824 00:20:41.823 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 48824 00:20:44.420 ************************************ 00:20:44.420 END TEST locking_app_on_locked_coremask 00:20:44.420 ************************************ 00:20:44.420 00:20:44.420 real 0m5.359s 00:20:44.420 user 0m5.606s 00:20:44.420 sys 0m1.242s 00:20:44.420 11:14:02 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:44.420 11:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:44.420 11:14:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:20:44.420 11:14:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:44.420 11:14:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:44.420 11:14:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:44.420 ************************************ 00:20:44.420 START TEST locking_overlapped_coremask 00:20:44.420 ************************************ 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=48926 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 48926 /var/tmp/spdk.sock 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 48926 ']' 00:20:44.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:44.420 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:44.420 [2024-05-15 11:14:02.681188] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:20:44.420 [2024-05-15 11:14:02.681368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48926 ] 00:20:44.420 [2024-05-15 11:14:02.842909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:44.678 [2024-05-15 11:14:03.101487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.678 [2024-05-15 11:14:03.101592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.678 [2024-05-15 11:14:03.101601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=48956 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 48956 /var/tmp/spdk2.sock 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 48956 /var/tmp/spdk2.sock 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:20:45.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 48956 /var/tmp/spdk2.sock 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 48956 ']' 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:45.612 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:45.612 [2024-05-15 11:14:04.096927] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:20:45.612 [2024-05-15 11:14:04.097108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48956 ] 00:20:45.871 [2024-05-15 11:14:04.300173] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 48926 has claimed it. 00:20:45.871 [2024-05-15 11:14:04.300252] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:20:46.130 ERROR: process (pid: 48956) is no longer running 00:20:46.130 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (48956) - No such process 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 48926 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 48926 ']' 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 48926 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48926 00:20:46.130 killing process with pid 48926 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48926' 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 48926 00:20:46.130 11:14:04 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 48926 00:20:48.668 ************************************ 00:20:48.668 END TEST locking_overlapped_coremask 00:20:48.668 ************************************ 00:20:48.668 00:20:48.668 real 0m4.413s 00:20:48.668 user 0m11.312s 00:20:48.668 sys 0m0.549s 00:20:48.668 11:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:48.668 11:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 11:14:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:20:48.668 11:14:06 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:48.668 11:14:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:48.668 11:14:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 ************************************ 00:20:48.668 START TEST locking_overlapped_coremask_via_rpc 00:20:48.668 ************************************ 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=49028 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 49028 /var/tmp/spdk.sock 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 49028 ']' 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:48.668 11:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 [2024-05-15 11:14:07.152016] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:20:48.668 [2024-05-15 11:14:07.152270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49028 ] 00:20:48.926 [2024-05-15 11:14:07.314731] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
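The core-lock behaviour exercised above hinges on one lock file per claimed core under /var/tmp: for mask 0x7 the test expects /var/tmp/spdk_cpu_lock_000 through _002 and verifies them with lslocks. A minimal shell sketch of how such a claim could be made, assuming an advisory flock(1)-style lock rather than SPDK's actual locking code:

  #!/bin/bash
  # Sketch only: emulate claiming a per-core lock file as the log suggests.
  # Assumes flock(1); the real spdk_tgt implementation may differ.
  core=0
  lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
  exec 200>"$lockfile"        # keep a descriptor open on the lock file
  if ! flock -n 200; then     # non-blocking: fail if another process holds it
      echo "Cannot create lock on core $core, another process has claimed it" >&2
      exit 1
  fi
  echo "core $core claimed via $lockfile"

A second shell running the same snippet for core 0 fails immediately, which mirrors the "Cannot create lock on core N, probably process X has claimed it" errors seen in these tests.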
00:20:48.926 [2024-05-15 11:14:07.314835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:48.926 [2024-05-15 11:14:07.542233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.926 [2024-05-15 11:14:07.542354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.926 [2024-05-15 11:14:07.542362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=49051 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 49051 /var/tmp/spdk2.sock 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 49051 ']' 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:49.861 11:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:20:50.119 [2024-05-15 11:14:08.543940] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:20:50.119 [2024-05-15 11:14:08.544121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49051 ] 00:20:50.119 [2024-05-15 11:14:08.747762] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:20:50.119 [2024-05-15 11:14:08.747839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:50.683 [2024-05-15 11:14:09.172665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.683 [2024-05-15 11:14:09.182939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:50.683 [2024-05-15 11:14:09.193824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.582 [2024-05-15 11:14:11.146054] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 49028 has claimed it. 00:20:52.582 request: 00:20:52.582 { 00:20:52.582 "method": "framework_enable_cpumask_locks", 00:20:52.582 "req_id": 1 00:20:52.582 } 00:20:52.582 Got JSON-RPC error response 00:20:52.582 response: 00:20:52.582 { 00:20:52.582 "code": -32603, 00:20:52.582 "message": "Failed to claim CPU core: 2" 00:20:52.582 } 00:20:52.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
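Both targets in this test start with --disable-cpumask-locks, so the per-core locks are only taken when the framework_enable_cpumask_locks RPC is issued; the second target then fails because core 2 is already held by pid 49028. Outside the harness the same call could be reproduced with SPDK's rpc.py (a sketch; repo-root path assumed):

  # Run from the SPDK repo root against the second target's RPC socket.
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # Expected while pid 49028 holds core 2: JSON-RPC error -32603, "Failed to claim CPU core: 2"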
00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 49028 /var/tmp/spdk.sock 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 49028 ']' 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:52.582 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:52.840 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:52.840 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:20:52.840 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 49051 /var/tmp/spdk2.sock 00:20:52.840 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 49051 ']' 00:20:52.840 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:52.840 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:52.840 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:20:52.840 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:52.840 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:53.134 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:53.134 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:20:53.134 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:20:53.134 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:53.134 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:53.134 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:53.134 00:20:53.134 real 0m4.600s 00:20:53.134 user 0m1.504s 00:20:53.134 sys 0m0.168s 00:20:53.134 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:53.134 ************************************ 00:20:53.134 END TEST locking_overlapped_coremask_via_rpc 00:20:53.134 ************************************ 00:20:53.134 11:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:53.134 11:14:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:20:53.134 11:14:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 49028 ]] 00:20:53.134 11:14:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 49028 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 49028 ']' 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 49028 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 49028 00:20:53.134 killing process with pid 49028 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49028' 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 49028 00:20:53.134 11:14:11 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 49028 00:20:55.662 11:14:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 49051 ]] 00:20:55.662 11:14:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 49051 00:20:55.662 11:14:14 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 49051 ']' 00:20:55.663 11:14:14 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 49051 00:20:55.663 11:14:14 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:20:55.663 11:14:14 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:55.663 
11:14:14 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 49051 00:20:55.663 killing process with pid 49051 00:20:55.663 11:14:14 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:55.663 11:14:14 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:55.663 11:14:14 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49051' 00:20:55.663 11:14:14 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 49051 00:20:55.663 11:14:14 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 49051 00:20:58.238 11:14:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:20:58.238 Process with pid 49028 is not found 00:20:58.238 Process with pid 49051 is not found 00:20:58.238 11:14:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:20:58.238 11:14:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 49028 ]] 00:20:58.238 11:14:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 49028 00:20:58.238 11:14:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 49028 ']' 00:20:58.238 11:14:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 49028 00:20:58.238 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (49028) - No such process 00:20:58.238 11:14:16 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 49028 is not found' 00:20:58.238 11:14:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 49051 ]] 00:20:58.238 11:14:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 49051 00:20:58.238 11:14:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 49051 ']' 00:20:58.238 11:14:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 49051 00:20:58.238 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (49051) - No such process 00:20:58.238 11:14:16 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 49051 is not found' 00:20:58.238 11:14:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:20:58.238 ************************************ 00:20:58.238 END TEST cpu_locks 00:20:58.238 ************************************ 00:20:58.238 00:20:58.238 real 0m53.829s 00:20:58.238 user 1m28.328s 00:20:58.238 sys 0m9.887s 00:20:58.238 11:14:16 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:58.238 11:14:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:58.238 ************************************ 00:20:58.238 END TEST event 00:20:58.238 ************************************ 00:20:58.238 00:20:58.238 real 1m3.288s 00:20:58.238 user 1m42.594s 00:20:58.238 sys 0m10.820s 00:20:58.238 11:14:16 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:58.238 11:14:16 event -- common/autotest_common.sh@10 -- # set +x 00:20:58.238 11:14:16 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:58.238 11:14:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:58.238 11:14:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:58.238 11:14:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.238 ************************************ 00:20:58.238 START TEST thread 00:20:58.238 ************************************ 00:20:58.238 11:14:16 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:58.238 * Looking for test storage... 
00:20:58.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:20:58.238 11:14:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:58.238 11:14:16 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:20:58.238 11:14:16 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:58.238 11:14:16 thread -- common/autotest_common.sh@10 -- # set +x 00:20:58.238 ************************************ 00:20:58.238 START TEST thread_poller_perf 00:20:58.238 ************************************ 00:20:58.238 11:14:16 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:58.238 [2024-05-15 11:14:16.519317] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:20:58.238 [2024-05-15 11:14:16.519496] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49267 ] 00:20:58.238 [2024-05-15 11:14:16.680079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.575 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:20:58.575 [2024-05-15 11:14:16.889073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.951 ====================================== 00:20:59.951 busy:2205633980 (cyc) 00:20:59.951 total_run_count: 1246000 00:20:59.951 tsc_hz: 2200000000 (cyc) 00:20:59.951 ====================================== 00:20:59.951 poller_cost: 1770 (cyc), 804 (nsec) 00:20:59.951 00:20:59.951 real 0m1.752s 00:20:59.951 user 0m1.542s 00:20:59.951 sys 0m0.108s 00:20:59.951 11:14:18 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:59.951 11:14:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:20:59.951 ************************************ 00:20:59.951 END TEST thread_poller_perf 00:20:59.951 ************************************ 00:20:59.951 11:14:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:59.951 11:14:18 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:20:59.951 11:14:18 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:59.951 11:14:18 thread -- common/autotest_common.sh@10 -- # set +x 00:20:59.951 ************************************ 00:20:59.951 START TEST thread_poller_perf 00:20:59.951 ************************************ 00:20:59.951 11:14:18 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:59.951 [2024-05-15 11:14:18.322619] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:20:59.951 [2024-05-15 11:14:18.323041] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49312 ] 00:20:59.952 [2024-05-15 11:14:18.494487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.210 Running 1000 pollers for 1 seconds with 0 microseconds period. 
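The poller_perf summary lines are related by simple arithmetic: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure converts that through tsc_hz. For the 1 microsecond-period run above (integer math, a quick check rather than part of the test):

  echo $(( 2205633980 / 1246000 ))             # 1770 cyc per poller iteration
  echo $(( 1770 * 1000000000 / 2200000000 ))   # ~804 nsec at the 2.2 GHz TSC rate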
00:21:00.210 [2024-05-15 11:14:18.746106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.586 ====================================== 00:21:01.586 busy:2203695713 (cyc) 00:21:01.586 total_run_count: 12204000 00:21:01.586 tsc_hz: 2200000000 (cyc) 00:21:01.586 ====================================== 00:21:01.586 poller_cost: 180 (cyc), 81 (nsec) 00:21:01.586 ************************************ 00:21:01.586 END TEST thread_poller_perf 00:21:01.586 ************************************ 00:21:01.586 00:21:01.586 real 0m1.833s 00:21:01.586 user 0m1.631s 00:21:01.586 sys 0m0.101s 00:21:01.586 11:14:20 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:01.586 11:14:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:21:01.586 11:14:20 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:21:01.586 11:14:20 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:21:01.586 11:14:20 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:01.586 11:14:20 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:01.586 11:14:20 thread -- common/autotest_common.sh@10 -- # set +x 00:21:01.586 ************************************ 00:21:01.586 START TEST thread_spdk_lock 00:21:01.586 ************************************ 00:21:01.586 11:14:20 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:21:01.586 [2024-05-15 11:14:20.198271] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:01.586 [2024-05-15 11:14:20.198438] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49360 ] 00:21:01.844 [2024-05-15 11:14:20.353158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:02.102 [2024-05-15 11:14:20.577236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.102 [2024-05-15 11:14:20.577244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.670 [2024-05-15 11:14:21.071400] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:21:02.670 [2024-05-15 11:14:21.071506] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:21:02.670 [2024-05-15 11:14:21.071540] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0xc3b0c0 00:21:02.670 [2024-05-15 11:14:21.080017] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:21:02.670 [2024-05-15 11:14:21.080119] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:21:02.670 [2024-05-15 11:14:21.080169] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:21:02.928 Starting test 
contend 00:21:02.928 Worker Delay Wait us Hold us Total us 00:21:02.928 0 3 178960 184478 363438 00:21:02.928 1 5 96214 287170 383384 00:21:02.929 PASS test contend 00:21:02.929 Starting test hold_by_poller 00:21:02.929 PASS test hold_by_poller 00:21:02.929 Starting test hold_by_message 00:21:02.929 PASS test hold_by_message 00:21:02.929 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:21:02.929 100014 assertions passed 00:21:02.929 0 assertions failed 00:21:02.929 ************************************ 00:21:02.929 END TEST thread_spdk_lock 00:21:02.929 ************************************ 00:21:02.929 00:21:02.929 real 0m1.299s 00:21:02.929 user 0m1.595s 00:21:02.929 sys 0m0.106s 00:21:02.929 11:14:21 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:02.929 11:14:21 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 ************************************ 00:21:02.929 END TEST thread 00:21:02.929 ************************************ 00:21:02.929 00:21:02.929 real 0m5.102s 00:21:02.929 user 0m4.853s 00:21:02.929 sys 0m0.443s 00:21:02.929 11:14:21 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:02.929 11:14:21 thread -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 11:14:21 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:21:02.929 11:14:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:02.929 11:14:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:02.929 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 ************************************ 00:21:02.929 START TEST accel 00:21:02.929 ************************************ 00:21:02.929 11:14:21 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:21:03.187 * Looking for test storage... 00:21:03.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:21:03.187 11:14:21 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:21:03.187 11:14:21 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:21:03.187 11:14:21 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:21:03.187 11:14:21 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=49450 00:21:03.187 11:14:21 accel -- accel/accel.sh@63 -- # waitforlisten 49450 00:21:03.187 11:14:21 accel -- common/autotest_common.sh@827 -- # '[' -z 49450 ']' 00:21:03.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.187 11:14:21 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.187 11:14:21 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:03.187 11:14:21 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
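In the contend summary above, Total us is, for these rows, just Wait us plus Hold us per worker: 178960 + 184478 = 363438 for worker 0 and 96214 + 287170 = 383384 for worker 1, so both threads spent roughly comparable total time on the lock even though worker 1 waited far less and held it far longer.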
00:21:03.187 11:14:21 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:03.187 11:14:21 accel -- common/autotest_common.sh@10 -- # set +x 00:21:03.187 11:14:21 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:21:03.187 11:14:21 accel -- accel/accel.sh@61 -- # build_accel_config 00:21:03.187 11:14:21 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:03.187 11:14:21 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:03.187 11:14:21 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:03.187 11:14:21 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:03.187 11:14:21 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:03.187 11:14:21 accel -- accel/accel.sh@40 -- # local IFS=, 00:21:03.187 11:14:21 accel -- accel/accel.sh@41 -- # jq -r . 00:21:03.187 [2024-05-15 11:14:21.769480] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:03.187 [2024-05-15 11:14:21.769713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49450 ] 00:21:03.445 [2024-05-15 11:14:21.937134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.703 [2024-05-15 11:14:22.203071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@860 -- # return 0 00:21:04.640 11:14:23 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:21:04.640 11:14:23 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:21:04.640 11:14:23 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:21:04.640 11:14:23 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:21:04.640 11:14:23 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:21:04.640 11:14:23 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:21:04.640 11:14:23 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@10 -- # set +x 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 
11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.640 11:14:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.640 11:14:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.640 11:14:23 accel -- accel/accel.sh@75 -- # killprocess 49450 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@946 -- # '[' -z 49450 ']' 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@950 -- # kill -0 49450 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@951 -- # uname 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 49450 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49450' 00:21:04.640 killing process with pid 49450 00:21:04.640 11:14:23 accel -- common/autotest_common.sh@965 -- # kill 49450 00:21:04.641 11:14:23 accel -- common/autotest_common.sh@970 -- # wait 49450 00:21:07.176 11:14:25 accel -- accel/accel.sh@76 -- # trap - ERR 00:21:07.176 11:14:25 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:21:07.176 11:14:25 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:07.176 11:14:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:07.176 11:14:25 accel -- common/autotest_common.sh@10 -- # set +x 00:21:07.176 11:14:25 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:21:07.176 11:14:25 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:21:07.176 11:14:25 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:21:07.176 11:14:25 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:07.176 11:14:25 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:07.176 11:14:25 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:07.176 11:14:25 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:07.176 11:14:25 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:07.176 11:14:25 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:21:07.176 11:14:25 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
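The long IFS== / read loop above is driven by a jq filter that flattens the accel_get_opc_assignments JSON object into one opcode=module line per entry. Its effect on a small two-entry object (an illustration, not output from this run):

  echo '{"copy":"software","fill":"software"}' \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # fill=software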
00:21:07.176 11:14:25 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:07.176 11:14:25 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:21:07.176 11:14:25 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:21:07.176 11:14:25 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:21:07.176 11:14:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:07.176 11:14:25 accel -- common/autotest_common.sh@10 -- # set +x 00:21:07.176 ************************************ 00:21:07.176 START TEST accel_missing_filename 00:21:07.176 ************************************ 00:21:07.176 11:14:25 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:21:07.176 11:14:25 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:21:07.176 11:14:25 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:21:07.176 11:14:25 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:21:07.176 11:14:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.176 11:14:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:21:07.176 11:14:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.176 11:14:25 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:21:07.176 11:14:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:21:07.176 11:14:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:21:07.176 11:14:25 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:07.176 11:14:25 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:07.176 11:14:25 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:07.176 11:14:25 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:07.176 11:14:25 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:07.176 11:14:25 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:21:07.176 11:14:25 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:21:07.435 [2024-05-15 11:14:25.858327] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:07.435 [2024-05-15 11:14:25.858510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49548 ] 00:21:07.435 [2024-05-15 11:14:26.019279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.693 [2024-05-15 11:14:26.239097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.951 [2024-05-15 11:14:26.439740] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:08.516 [2024-05-15 11:14:26.947769] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:21:08.775 A filename is required. 
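That failure is the intended negative case: for a compress workload accel_perf needs an input file via -l. The positive form of the same invocation, using the test's own sample file (a sketch of the corrected command, not run here):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib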
00:21:08.775 11:14:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:21:08.775 11:14:27 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:08.775 11:14:27 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:21:08.775 11:14:27 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:21:08.775 11:14:27 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:21:08.775 ************************************ 00:21:08.775 END TEST accel_missing_filename 00:21:08.775 ************************************ 00:21:08.775 11:14:27 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:08.775 00:21:08.775 real 0m1.607s 00:21:08.775 user 0m1.309s 00:21:08.775 sys 0m0.173s 00:21:08.775 11:14:27 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:08.775 11:14:27 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:21:08.775 11:14:27 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:08.775 11:14:27 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:21:08.775 11:14:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:08.775 11:14:27 accel -- common/autotest_common.sh@10 -- # set +x 00:21:08.775 ************************************ 00:21:08.775 START TEST accel_compress_verify 00:21:08.775 ************************************ 00:21:08.775 11:14:27 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:08.775 11:14:27 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:21:08.775 11:14:27 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:08.775 11:14:27 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:21:08.775 11:14:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.775 11:14:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:21:08.775 11:14:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.775 11:14:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:08.775 11:14:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:08.775 11:14:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:21:08.775 11:14:27 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:08.775 11:14:27 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:08.775 11:14:27 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:08.775 11:14:27 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:08.775 11:14:27 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:08.775 11:14:27 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:21:08.775 11:14:27 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:21:09.033 [2024-05-15 11:14:27.512633] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:09.033 [2024-05-15 11:14:27.512833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49600 ] 00:21:09.033 [2024-05-15 11:14:27.663752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.291 [2024-05-15 11:14:27.881084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.549 [2024-05-15 11:14:28.080340] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:10.116 [2024-05-15 11:14:28.565228] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:21:10.374 00:21:10.374 Compression does not support the verify option, aborting. 00:21:10.374 11:14:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:21:10.374 11:14:28 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:10.374 11:14:28 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:21:10.374 11:14:28 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:21:10.374 11:14:28 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:21:10.374 11:14:28 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:10.374 00:21:10.374 real 0m1.563s 00:21:10.374 user 0m1.246s 00:21:10.374 sys 0m0.173s 00:21:10.374 11:14:28 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:10.374 11:14:28 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:21:10.374 ************************************ 00:21:10.374 END TEST accel_compress_verify 00:21:10.374 ************************************ 00:21:10.374 11:14:28 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:21:10.374 11:14:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:21:10.374 11:14:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:10.374 11:14:28 accel -- common/autotest_common.sh@10 -- # set +x 00:21:10.374 ************************************ 00:21:10.374 START TEST accel_wrong_workload 00:21:10.374 ************************************ 00:21:10.374 11:14:28 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:21:10.374 11:14:28 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:21:10.374 11:14:28 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:21:10.374 11:14:28 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:21:10.374 11:14:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.374 11:14:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:21:10.374 11:14:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.374 11:14:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:21:10.374 11:14:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:21:10.374 11:14:28 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:21:10.374 11:14:28 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:10.374 11:14:28 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:10.374 11:14:28 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:10.374 11:14:28 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:10.374 11:14:28 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:10.374 11:14:28 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:21:10.374 11:14:28 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:21:10.633 Unsupported workload type: foobar 00:21:10.633 [2024-05-15 11:14:29.121317] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:21:10.633 accel_perf options: 00:21:10.633 [-h help message] 00:21:10.633 [-q queue depth per core] 00:21:10.633 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:21:10.633 [-T number of threads per core 00:21:10.633 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:21:10.633 [-t time in seconds] 00:21:10.633 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:21:10.633 [ dif_verify, , dif_generate, dif_generate_copy 00:21:10.633 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:21:10.633 [-l for compress/decompress workloads, name of uncompressed input file 00:21:10.633 [-S for crc32c workload, use this seed value (default 0) 00:21:10.633 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:21:10.633 [-f for fill workload, use this BYTE value (default 255) 00:21:10.633 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:21:10.633 [-y verify result if this switch is on] 00:21:10.633 [-a tasks to allocate per core (default: same value as -q)] 00:21:10.633 Can be used to spread operations across a wider range of memory. 
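The option summary above is printed because foobar is not an accepted value for -w; the recognized workload types are the ones listed (copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, dif_verify, dif_generate, dif_generate_copy). For contrast, a minimal valid invocation, assembled only from flags this log itself exercises in the crc32c test below (crc seed via -S, result verification via -y), would be roughly:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y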
00:21:10.633 11:14:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:21:10.633 11:14:29 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:10.633 11:14:29 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:10.633 11:14:29 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:10.633 00:21:10.633 real 0m0.157s 00:21:10.633 user 0m0.087s 00:21:10.633 sys 0m0.035s 00:21:10.633 11:14:29 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:10.633 11:14:29 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:21:10.633 ************************************ 00:21:10.633 END TEST accel_wrong_workload 00:21:10.633 ************************************ 00:21:10.633 11:14:29 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:21:10.633 11:14:29 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:21:10.633 11:14:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:10.633 11:14:29 accel -- common/autotest_common.sh@10 -- # set +x 00:21:10.633 ************************************ 00:21:10.633 START TEST accel_negative_buffers 00:21:10.633 ************************************ 00:21:10.633 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:21:10.633 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:21:10.633 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:21:10.633 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:21:10.633 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.633 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:21:10.633 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.633 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:21:10.633 11:14:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:21:10.633 11:14:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:21:10.633 11:14:29 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:10.633 11:14:29 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:10.633 11:14:29 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:10.633 11:14:29 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:10.633 11:14:29 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:10.633 11:14:29 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:21:10.633 11:14:29 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:21:10.892 -x option must be non-negative. 
00:21:10.892 [2024-05-15 11:14:29.325674] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:21:10.892 accel_perf options: 00:21:10.892 [-h help message] 00:21:10.892 [-q queue depth per core] 00:21:10.892 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:21:10.892 [-T number of threads per core 00:21:10.892 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:21:10.892 [-t time in seconds] 00:21:10.892 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:21:10.892 [ dif_verify, , dif_generate, dif_generate_copy 00:21:10.892 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:21:10.892 [-l for compress/decompress workloads, name of uncompressed input file 00:21:10.892 [-S for crc32c workload, use this seed value (default 0) 00:21:10.892 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:21:10.892 [-f for fill workload, use this BYTE value (default 255) 00:21:10.892 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:21:10.892 [-y verify result if this switch is on] 00:21:10.892 [-a tasks to allocate per core (default: same value as -q)] 00:21:10.892 Can be used to spread operations across a wider range of memory. 00:21:10.892 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:21:10.892 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:10.892 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:10.892 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:10.892 00:21:10.892 real 0m0.155s 00:21:10.892 user 0m0.078s 00:21:10.892 sys 0m0.034s 00:21:10.892 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:10.892 ************************************ 00:21:10.892 END TEST accel_negative_buffers 00:21:10.892 ************************************ 00:21:10.892 11:14:29 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:21:10.892 11:14:29 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:21:10.892 11:14:29 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:21:10.892 11:14:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:10.892 11:14:29 accel -- common/autotest_common.sh@10 -- # set +x 00:21:10.892 ************************************ 00:21:10.892 START TEST accel_crc32c 00:21:10.892 ************************************ 00:21:10.892 11:14:29 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:21:10.892 11:14:29 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:21:11.151 [2024-05-15 11:14:29.533484] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:11.151 [2024-05-15 11:14:29.533798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49691 ] 00:21:11.151 [2024-05-15 11:14:29.713098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.409 [2024-05-15 11:14:29.965454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.667 11:14:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:13.565 11:14:32 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:21:13.565 11:14:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:13.565 00:21:13.565 real 0m2.724s 00:21:13.565 user 0m2.374s 00:21:13.565 sys 0m0.201s 00:21:13.565 ************************************ 00:21:13.565 END TEST accel_crc32c 00:21:13.565 ************************************ 00:21:13.565 11:14:32 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:13.565 11:14:32 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:21:13.565 11:14:32 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:21:13.565 11:14:32 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:21:13.565 11:14:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:13.565 11:14:32 accel -- common/autotest_common.sh@10 -- # set +x 00:21:13.565 ************************************ 00:21:13.565 START TEST accel_crc32c_C2 00:21:13.565 ************************************ 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:13.565 11:14:32 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:13.565 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:13.566 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:13.566 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:21:13.566 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:21:13.824 [2024-05-15 11:14:32.301029] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:13.824 [2024-05-15 11:14:32.301211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49747 ] 00:21:14.081 [2024-05-15 11:14:32.469744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.081 [2024-05-15 11:14:32.708232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:14.341 11:14:32 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.341 11:14:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:16.242 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:16.242 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:16.242 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:16.242 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:21:16.242 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:16.242 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:16.243 00:21:16.243 real 0m2.622s 00:21:16.243 user 0m2.284s 00:21:16.243 sys 0m0.188s 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:16.243 11:14:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:21:16.243 ************************************ 00:21:16.243 END TEST accel_crc32c_C2 00:21:16.243 ************************************ 00:21:16.243 11:14:34 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:21:16.243 11:14:34 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:21:16.243 11:14:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:16.243 11:14:34 accel -- common/autotest_common.sh@10 -- # set +x 00:21:16.243 ************************************ 00:21:16.243 START TEST accel_copy 00:21:16.243 ************************************ 00:21:16.243 11:14:34 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:21:16.243 
11:14:34 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:21:16.243 11:14:34 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:21:16.501 [2024-05-15 11:14:34.961662] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:16.501 [2024-05-15 11:14:34.961824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49808 ] 00:21:16.501 [2024-05-15 11:14:35.117515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.760 [2024-05-15 11:14:35.340452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:21:17.018 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:17.019 11:14:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:21:18.919 ************************************ 00:21:18.919 END TEST accel_copy 00:21:18.919 ************************************ 00:21:18.919 11:14:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:18.919 00:21:18.919 real 0m2.576s 00:21:18.919 user 0m2.254s 00:21:18.919 sys 0m0.184s 00:21:18.919 11:14:37 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:18.919 11:14:37 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:21:18.919 11:14:37 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:18.919 11:14:37 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:21:18.919 11:14:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:18.919 11:14:37 accel -- common/autotest_common.sh@10 -- # set +x 00:21:18.919 ************************************ 00:21:18.919 START TEST accel_fill 00:21:18.919 ************************************ 00:21:18.919 11:14:37 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:21:18.919 11:14:37 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:21:19.177 [2024-05-15 11:14:37.580803] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:21:19.177 [2024-05-15 11:14:37.581020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49866 ] 00:21:19.177 [2024-05-15 11:14:37.733879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.435 [2024-05-15 11:14:37.958522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.693 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:19.694 11:14:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:21.594 11:14:39 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:21.594 11:14:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:21.594 ************************************ 00:21:21.594 END TEST accel_fill 00:21:21.594 ************************************ 00:21:21.594 11:14:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:21.594 11:14:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:21:21.594 11:14:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:21.594 00:21:21.594 real 0m2.573s 00:21:21.594 user 0m2.235s 00:21:21.594 sys 0m0.187s 00:21:21.594 11:14:40 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:21.594 11:14:40 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:21:21.594 11:14:40 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:21:21.594 11:14:40 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:21:21.594 11:14:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:21.594 11:14:40 accel -- common/autotest_common.sh@10 -- # set +x 00:21:21.594 ************************************ 00:21:21.594 START TEST accel_copy_crc32c 00:21:21.594 ************************************ 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:21:21.594 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:21:21.594 [2024-05-15 11:14:40.200346] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:21:21.594 [2024-05-15 11:14:40.200529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49924 ] 00:21:21.853 [2024-05-15 11:14:40.353048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.111 [2024-05-15 11:14:40.589524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:22.370 11:14:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:24.270 ************************************ 00:21:24.270 END TEST accel_copy_crc32c 00:21:24.270 ************************************ 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:24.270 00:21:24.270 real 0m2.657s 00:21:24.270 user 0m2.322s 00:21:24.270 sys 0m0.197s 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:24.270 11:14:42 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:21:24.270 11:14:42 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:21:24.270 11:14:42 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:21:24.270 11:14:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:24.270 11:14:42 accel -- common/autotest_common.sh@10 -- # set +x 00:21:24.270 ************************************ 00:21:24.270 START TEST accel_copy_crc32c_C2 00:21:24.270 ************************************ 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:21:24.270 11:14:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:21:24.270 [2024-05-15 11:14:42.905074] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:24.270 [2024-05-15 11:14:42.905367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49987 ] 00:21:24.528 [2024-05-15 11:14:43.063729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.787 [2024-05-15 11:14:43.337246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:25.045 11:14:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:26.945 ************************************ 00:21:26.945 END TEST accel_copy_crc32c_C2 00:21:26.945 ************************************ 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:26.945 00:21:26.945 real 0m2.715s 00:21:26.945 user 0m2.404s 00:21:26.945 sys 0m0.185s 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:26.945 11:14:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.945 11:14:45 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:21:26.945 11:14:45 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:21:26.945 11:14:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:26.945 11:14:45 accel -- common/autotest_common.sh@10 -- # set +x 00:21:26.945 ************************************ 00:21:26.945 START TEST accel_dualcast 00:21:26.945 ************************************ 00:21:26.945 11:14:45 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:21:26.945 11:14:45 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:21:26.945 11:14:45 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:21:26.946 11:14:45 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:21:27.203 [2024-05-15 11:14:45.666087] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:21:27.203 [2024-05-15 11:14:45.666254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50040 ] 00:21:27.203 [2024-05-15 11:14:45.830698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.461 [2024-05-15 11:14:46.071800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.718 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:27.718 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:27.719 11:14:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:29.622 
11:14:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:29.622 ************************************ 00:21:29.622 END TEST accel_dualcast 00:21:29.622 ************************************ 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:21:29.622 11:14:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:29.622 00:21:29.622 real 0m2.699s 00:21:29.622 user 0m2.353s 00:21:29.622 sys 0m0.204s 00:21:29.622 11:14:48 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:29.622 11:14:48 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:21:29.880 11:14:48 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:21:29.880 11:14:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:21:29.880 11:14:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:29.880 11:14:48 accel -- common/autotest_common.sh@10 -- # set +x 00:21:29.880 ************************************ 00:21:29.880 START TEST accel_compare 00:21:29.880 ************************************ 00:21:29.880 11:14:48 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:21:29.880 11:14:48 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:21:29.880 [2024-05-15 11:14:48.419272] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
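The repeated xtrace lines in each test (IFS=:, read -r var val, case "$var" in, then the accel_module= and accel_opc= assignments at accel.sh@22/@23) show accel.sh parsing accel_perf's configuration printout one key:value line at a time. A rough bash sketch of that loop follows; the key names in the case patterns are illustrative assumptions, since the patterns themselves are not echoed in the trace, and reading accel_perf's stdout this way is likewise assumed:

  while IFS=: read -r var val; do
    case "$var" in
      *Module*)   accel_module=$val ;;   # e.g. software
      *Workload*) accel_opc=$val ;;      # e.g. dualcast
    esac
  done < <("$accel_perf" -t 1 -w dualcast -y)
  [[ -n $accel_module ]] && [[ -n $accel_opc ]]   # the accel.sh@27 checks closing each test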
00:21:29.880 [2024-05-15 11:14:48.419460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50103 ] 00:21:30.139 [2024-05-15 11:14:48.586782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.448 [2024-05-15 11:14:48.818476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:30.448 11:14:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:32.345 11:14:50 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:32.345 ************************************ 00:21:32.345 END TEST accel_compare 00:21:32.345 ************************************ 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:21:32.345 11:14:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:32.345 00:21:32.345 real 0m2.651s 00:21:32.345 user 0m2.311s 00:21:32.345 sys 0m0.200s 00:21:32.345 11:14:50 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:32.345 11:14:50 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:21:32.345 11:14:50 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:21:32.345 11:14:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:21:32.345 11:14:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:32.345 11:14:50 accel -- common/autotest_common.sh@10 -- # set +x 00:21:32.345 ************************************ 00:21:32.345 START TEST accel_xor 00:21:32.345 ************************************ 00:21:32.345 11:14:50 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:32.345 11:14:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:21:32.603 11:14:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:21:32.603 [2024-05-15 11:14:51.118677] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:21:32.603 [2024-05-15 11:14:51.119054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50161 ] 00:21:32.860 [2024-05-15 11:14:51.270367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.119 [2024-05-15 11:14:51.498746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:21:33.119 11:14:51 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:33.119 11:14:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:35.094 00:21:35.094 real 0m2.610s 00:21:35.094 user 0m2.284s 00:21:35.094 sys 0m0.172s 00:21:35.094 ************************************ 00:21:35.094 END TEST accel_xor 00:21:35.094 ************************************ 00:21:35.094 11:14:53 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:35.094 11:14:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:21:35.094 11:14:53 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:21:35.094 11:14:53 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:21:35.094 11:14:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:35.094 11:14:53 accel -- common/autotest_common.sh@10 -- # set +x 00:21:35.094 ************************************ 00:21:35.094 START TEST accel_xor 00:21:35.094 ************************************ 00:21:35.094 11:14:53 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:35.094 11:14:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:35.095 11:14:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:35.095 11:14:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:35.095 11:14:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:21:35.095 11:14:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:21:35.353 [2024-05-15 11:14:53.778491] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:21:35.353 [2024-05-15 11:14:53.778803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50212 ] 00:21:35.353 [2024-05-15 11:14:53.958099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.611 [2024-05-15 11:14:54.220004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:21:35.870 11:14:54 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:35.870 11:14:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:21:37.772 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:37.773 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:37.773 11:14:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:37.773 11:14:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:37.773 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:37.773 11:14:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:37.773 11:14:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:37.773 11:14:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:21:37.773 11:14:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:37.773 00:21:37.773 real 0m2.709s 00:21:37.773 user 0m2.358s 00:21:37.773 sys 0m0.207s 00:21:37.773 11:14:56 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:37.773 11:14:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:21:37.773 ************************************ 00:21:37.773 END TEST accel_xor 00:21:37.773 ************************************ 00:21:37.773 11:14:56 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:21:37.773 11:14:56 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:21:37.773 11:14:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:37.773 11:14:56 accel -- common/autotest_common.sh@10 -- # set +x 00:21:37.773 ************************************ 00:21:37.773 START TEST accel_dif_verify 00:21:37.773 ************************************ 00:21:37.773 11:14:56 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:21:37.773 11:14:56 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:21:38.032 [2024-05-15 11:14:56.529022] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:21:38.032 [2024-05-15 11:14:56.529226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50277 ] 00:21:38.290 [2024-05-15 11:14:56.693515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.290 [2024-05-15 11:14:56.924792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.549 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:38.550 11:14:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:40.451 ************************************ 00:21:40.451 END TEST accel_dif_verify 00:21:40.451 ************************************ 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:21:40.451 11:14:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:40.451 00:21:40.451 real 0m2.633s 00:21:40.451 user 0m2.272s 00:21:40.451 sys 0m0.224s 00:21:40.451 11:14:59 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:40.451 11:14:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:21:40.451 11:14:59 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:21:40.451 11:14:59 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:21:40.451 11:14:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:40.451 11:14:59 accel -- common/autotest_common.sh@10 -- # set +x 00:21:40.451 ************************************ 00:21:40.451 START TEST accel_dif_generate 00:21:40.451 ************************************ 00:21:40.451 11:14:59 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:40.451 11:14:59 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:40.452 11:14:59 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:40.452 11:14:59 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:40.452 11:14:59 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:21:40.452 11:14:59 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:21:40.711 [2024-05-15 11:14:59.211683] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:40.711 [2024-05-15 11:14:59.211872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50335 ] 00:21:40.997 [2024-05-15 11:14:59.364506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.997 [2024-05-15 11:14:59.596440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 
11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.259 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.260 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:41.260 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.260 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.260 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:41.260 11:14:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:41.260 11:14:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:41.260 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:41.260 11:14:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:21:43.161 11:15:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:43.161 ************************************ 00:21:43.161 END TEST accel_dif_generate 00:21:43.161 ************************************ 00:21:43.161 00:21:43.161 real 0m2.630s 00:21:43.161 user 0m2.317s 
00:21:43.161 sys 0m0.186s 00:21:43.161 11:15:01 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:43.161 11:15:01 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:21:43.161 11:15:01 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:21:43.161 11:15:01 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:21:43.161 11:15:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:43.161 11:15:01 accel -- common/autotest_common.sh@10 -- # set +x 00:21:43.161 ************************************ 00:21:43.161 START TEST accel_dif_generate_copy 00:21:43.161 ************************************ 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:21:43.161 11:15:01 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:21:43.419 [2024-05-15 11:15:01.882972] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
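Each accel case in this log is launched as run_test <name> accel_test -t 1 -w <opcode>, and accel_test in turn runs build/examples/accel_perf -c /dev/fd/62 -t 1 -w <opcode>; the long runs of val= lines are the harness stepping through the per-case settings (opcode, buffer sizes, queue depth 32, a 1-second duration, the software module) that it checks once accel_perf exits. A minimal stand-alone sketch of the dif_generate_copy case that starts here, assuming the /dev/fd/62 config plumbing can be dropped because this run leaves accel_json_cfg empty:

  # Sketch only, not the harness's exact invocation; binary path taken from this log.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy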
00:21:43.419 [2024-05-15 11:15:01.883182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50391 ] 00:21:43.419 [2024-05-15 11:15:02.036263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.677 [2024-05-15 11:15:02.258758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:43.936 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:43.937 11:15:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:45.840 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:45.840 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:45.840 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:45.840 11:15:04 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:21:45.840 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:45.840 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:45.841 ************************************ 00:21:45.841 END TEST accel_dif_generate_copy 00:21:45.841 ************************************ 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:45.841 00:21:45.841 real 0m2.565s 00:21:45.841 user 0m2.253s 00:21:45.841 sys 0m0.178s 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:45.841 11:15:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:21:45.841 11:15:04 accel -- accel/accel.sh@115 -- # [[ n == y ]] 00:21:45.841 11:15:04 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:21:45.841 11:15:04 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:21:45.841 11:15:04 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:45.841 11:15:04 accel -- accel/accel.sh@137 -- # build_accel_config 00:21:45.841 11:15:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:45.841 11:15:04 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:45.841 11:15:04 accel -- common/autotest_common.sh@10 -- # set +x 00:21:45.841 11:15:04 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:45.841 11:15:04 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:45.841 11:15:04 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:45.841 11:15:04 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:45.841 11:15:04 accel -- accel/accel.sh@40 
-- # local IFS=, 00:21:45.841 11:15:04 accel -- accel/accel.sh@41 -- # jq -r . 00:21:45.841 ************************************ 00:21:45.841 START TEST accel_dif_functional_tests 00:21:45.841 ************************************ 00:21:45.841 11:15:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:21:46.099 [2024-05-15 11:15:04.501604] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:46.099 [2024-05-15 11:15:04.501775] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50449 ] 00:21:46.099 [2024-05-15 11:15:04.662266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:46.358 [2024-05-15 11:15:04.890294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.358 [2024-05-15 11:15:04.890403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.358 [2024-05-15 11:15:04.890415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.924 00:21:46.924 00:21:46.924 CUnit - A unit testing framework for C - Version 2.1-3 00:21:46.924 http://cunit.sourceforge.net/ 00:21:46.924 00:21:46.924 00:21:46.924 Suite: accel_dif 00:21:46.924 Test: verify: DIF generated, GUARD check ...passed 00:21:46.924 Test: verify: DIF generated, APPTAG check ...passed 00:21:46.924 Test: verify: DIF generated, REFTAG check ...passed 00:21:46.924 Test: verify: DIF not generated, GUARD check ...[2024-05-15 11:15:05.258573] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:21:46.924 passed 00:21:46.924 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 11:15:05.258917] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:21:46.924 [2024-05-15 11:15:05.259004] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:21:46.924 passed 00:21:46.924 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 11:15:05.259139] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:21:46.924 passed 00:21:46.924 Test: verify: APPTAG correct, APPTAG check ...passed 00:21:46.924 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:21:46.924 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:21:46.924 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-05-15 11:15:05.259394] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:21:46.924 [2024-05-15 11:15:05.259545] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:21:46.924 [2024-05-15 11:15:05.259669] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:21:46.924 passed 00:21:46.924 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:21:46.924 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 11:15:05.260237] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:21:46.924 passed 00:21:46.924 Test: generate copy: DIF generated, GUARD check ...passed 00:21:46.924 Test: generate copy: DIF generated, APTTAG check ...passed 00:21:46.924 Test: generate copy: DIF generated, REFTAG check ...passed 00:21:46.924 Test: 
generate copy: DIF generated, no GUARD check flag set ...passed 00:21:46.924 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:21:46.924 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:21:46.924 Test: generate copy: iovecs-len validate ...[2024-05-15 11:15:05.261410] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:21:46.924 passed 00:21:46.924 Test: generate copy: buffer alignment validate ...passed 00:21:46.924 00:21:46.924 Run Summary: Type Total Ran Passed Failed Inactive 00:21:46.924 suites 1 1 n/a 0 0 00:21:46.924 tests 20 20 20 0 0 00:21:46.924 asserts 204 204 204 0 n/a 00:21:46.924 00:21:46.924 Elapsed time = 0.010 seconds 00:21:47.893 ************************************ 00:21:47.893 END TEST accel_dif_functional_tests 00:21:47.893 ************************************ 00:21:47.893 00:21:47.893 real 0m2.120s 00:21:47.893 user 0m4.128s 00:21:47.893 sys 0m0.241s 00:21:47.893 11:15:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:47.893 11:15:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:21:47.893 ************************************ 00:21:47.893 END TEST accel 00:21:47.893 ************************************ 00:21:47.893 00:21:47.893 real 0m44.970s 00:21:47.893 user 0m41.035s 00:21:47.893 sys 0m4.228s 00:21:47.893 11:15:06 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:47.893 11:15:06 accel -- common/autotest_common.sh@10 -- # set +x 00:21:48.151 11:15:06 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:21:48.151 11:15:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:48.151 11:15:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:48.151 11:15:06 -- common/autotest_common.sh@10 -- # set +x 00:21:48.151 ************************************ 00:21:48.151 START TEST accel_rpc 00:21:48.151 ************************************ 00:21:48.151 11:15:06 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:21:48.151 * Looking for test storage... 00:21:48.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:21:48.151 11:15:06 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:21:48.151 11:15:06 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=50556 00:21:48.151 11:15:06 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 50556 00:21:48.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.151 11:15:06 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 50556 ']' 00:21:48.151 11:15:06 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.151 11:15:06 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:48.151 11:15:06 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.151 11:15:06 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:48.151 11:15:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.151 11:15:06 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:48.151 [2024-05-15 11:15:06.784565] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
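The accel_rpc suite that starts here exercises the RPC surface of a bare spdk_tgt instead of accel_perf: the target is started with --wait-for-rpc, an opcode is pinned to a module with accel_assign_opc, initialization is completed with framework_start_init, and the assignment is read back. A condensed manual equivalent of the sequence traced below, with an illustrative sleep standing in for the harness's waitforlisten helper:

  SPDK=/home/vagrant/spdk_repo/spdk            # shorthand for the paths used in this log
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
  sleep 1                                      # crude stand-in for waitforlisten
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
  "$SPDK/scripts/rpc.py" framework_start_init
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # prints: software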
00:21:48.151 [2024-05-15 11:15:06.784748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50556 ] 00:21:48.409 [2024-05-15 11:15:06.938280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.667 [2024-05-15 11:15:07.158726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.241 11:15:07 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:49.241 11:15:07 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:21:49.241 11:15:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:21:49.241 11:15:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:21:49.241 11:15:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:21:49.241 11:15:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:21:49.241 11:15:07 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:21:49.241 11:15:07 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:49.241 11:15:07 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:49.241 11:15:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:49.241 ************************************ 00:21:49.241 START TEST accel_assign_opcode 00:21:49.241 ************************************ 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:21:49.241 [2024-05-15 11:15:07.594984] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:21:49.241 [2024-05-15 11:15:07.606939] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.241 11:15:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:21:49.806 11:15:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.806 11:15:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:21:49.806 11:15:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:21:49.806 11:15:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.806 11:15:08 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.806 11:15:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:21:49.806 11:15:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.064 software 00:21:50.064 00:21:50.064 real 0m0.899s 00:21:50.064 user 0m0.064s 00:21:50.064 sys 0m0.009s 00:21:50.064 11:15:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:50.064 11:15:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:21:50.064 ************************************ 00:21:50.064 END TEST accel_assign_opcode 00:21:50.064 ************************************ 00:21:50.064 11:15:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 50556 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 50556 ']' 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 50556 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 50556 00:21:50.064 killing process with pid 50556 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50556' 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@965 -- # kill 50556 00:21:50.064 11:15:08 accel_rpc -- common/autotest_common.sh@970 -- # wait 50556 00:21:52.593 ************************************ 00:21:52.593 END TEST accel_rpc 00:21:52.593 ************************************ 00:21:52.593 00:21:52.593 real 0m4.248s 00:21:52.593 user 0m4.063s 00:21:52.593 sys 0m0.546s 00:21:52.593 11:15:10 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:52.593 11:15:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:52.593 11:15:10 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:21:52.593 11:15:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:52.593 11:15:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:52.593 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:21:52.593 ************************************ 00:21:52.593 START TEST app_cmdline 00:21:52.593 ************************************ 00:21:52.593 11:15:10 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:21:52.593 * Looking for test storage... 00:21:52.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:21:52.593 11:15:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:21:52.593 11:15:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=50700 00:21:52.593 11:15:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 50700 00:21:52.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
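For app_cmdline the target is restarted with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable; the env_dpdk_get_mem_stats call traced further down is therefore expected to fail with JSON-RPC error -32601 (Method not found), which is what the response below records. The same check by hand, reusing the $SPDK shorthand from the sketch above:

  "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
  "$SPDK/scripts/rpc.py" spdk_get_version               # allowed, prints the version JSON
  "$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]'  # allowed, lists the two methods
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats         # rejected with "Method not found"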
00:21:52.593 11:15:10 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 50700 ']' 00:21:52.593 11:15:10 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.593 11:15:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:21:52.593 11:15:10 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:52.593 11:15:10 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.593 11:15:10 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:52.593 11:15:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:21:52.593 [2024-05-15 11:15:11.083890] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:52.593 [2024-05-15 11:15:11.084075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50700 ] 00:21:52.852 [2024-05-15 11:15:11.235660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.852 [2024-05-15 11:15:11.456107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.832 11:15:12 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:53.832 11:15:12 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:21:53.832 11:15:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:21:53.832 { 00:21:53.832 "version": "SPDK v24.05-pre git sha1 b7a2519d9", 00:21:53.832 "fields": { 00:21:53.832 "major": 24, 00:21:53.832 "minor": 5, 00:21:53.832 "patch": 0, 00:21:53.832 "suffix": "-pre", 00:21:53.832 "commit": "b7a2519d9" 00:21:53.832 } 00:21:53.832 } 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:21:54.100 11:15:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
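The version suite traced a little further down never starts a target at all; it scrapes include/spdk/version.h with a grep | cut | tr pipeline and compares the result against what the Python package reports via python3 -c 'import spdk; print(spdk.__version__)'. A stand-alone sketch of that extraction, using the header path shown in the log:

  V=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$V" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$V" | cut -f2 | tr -d '"')
  echo "$major.$minor"    # 24.5 for this build; the suite maps the -pre suffix to rc0, giving 24.5rc0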
00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:54.100 11:15:12 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:54.100 request: 00:21:54.100 { 00:21:54.100 "method": "env_dpdk_get_mem_stats", 00:21:54.100 "req_id": 1 00:21:54.100 } 00:21:54.100 Got JSON-RPC error response 00:21:54.100 response: 00:21:54.100 { 00:21:54.100 "code": -32601, 00:21:54.100 "message": "Method not found" 00:21:54.100 } 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.359 11:15:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 50700 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 50700 ']' 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 50700 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 50700 00:21:54.359 killing process with pid 50700 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50700' 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@965 -- # kill 50700 00:21:54.359 11:15:12 app_cmdline -- common/autotest_common.sh@970 -- # wait 50700 00:21:56.890 ************************************ 00:21:56.890 END TEST app_cmdline 00:21:56.890 ************************************ 00:21:56.890 00:21:56.890 real 0m4.146s 00:21:56.890 user 0m4.350s 00:21:56.890 sys 0m0.553s 00:21:56.890 11:15:15 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:56.890 11:15:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:21:56.890 11:15:15 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:21:56.890 11:15:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:56.890 11:15:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:56.890 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:21:56.890 ************************************ 00:21:56.890 START TEST version 00:21:56.890 ************************************ 00:21:56.890 11:15:15 version -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:21:56.890 * Looking for test storage... 00:21:56.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:21:56.890 11:15:15 version -- app/version.sh@17 -- # get_header_version major 00:21:56.890 11:15:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:56.890 11:15:15 version -- app/version.sh@14 -- # cut -f2 00:21:56.890 11:15:15 version -- app/version.sh@14 -- # tr -d '"' 00:21:56.890 11:15:15 version -- app/version.sh@17 -- # major=24 00:21:56.890 11:15:15 version -- app/version.sh@18 -- # get_header_version minor 00:21:56.890 11:15:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:56.890 11:15:15 version -- app/version.sh@14 -- # tr -d '"' 00:21:56.890 11:15:15 version -- app/version.sh@14 -- # cut -f2 00:21:56.890 11:15:15 version -- app/version.sh@18 -- # minor=5 00:21:56.890 11:15:15 version -- app/version.sh@19 -- # get_header_version patch 00:21:56.890 11:15:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:56.890 11:15:15 version -- app/version.sh@14 -- # cut -f2 00:21:56.890 11:15:15 version -- app/version.sh@14 -- # tr -d '"' 00:21:56.890 11:15:15 version -- app/version.sh@19 -- # patch=0 00:21:56.890 11:15:15 version -- app/version.sh@20 -- # get_header_version suffix 00:21:56.890 11:15:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:56.890 11:15:15 version -- app/version.sh@14 -- # cut -f2 00:21:56.890 11:15:15 version -- app/version.sh@14 -- # tr -d '"' 00:21:56.890 11:15:15 version -- app/version.sh@20 -- # suffix=-pre 00:21:56.890 11:15:15 version -- app/version.sh@22 -- # version=24.5 00:21:56.890 11:15:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:21:56.890 11:15:15 version -- app/version.sh@28 -- # version=24.5rc0 00:21:56.890 11:15:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:21:56.890 11:15:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:21:56.891 11:15:15 version -- app/version.sh@30 -- # py_version=24.5rc0 00:21:56.891 11:15:15 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:21:56.891 ************************************ 00:21:56.891 END TEST version 00:21:56.891 ************************************ 00:21:56.891 00:21:56.891 real 0m0.138s 00:21:56.891 user 0m0.085s 00:21:56.891 sys 0m0.085s 00:21:56.891 11:15:15 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:56.891 11:15:15 version -- common/autotest_common.sh@10 -- # set +x 00:21:56.891 11:15:15 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:21:56.891 11:15:15 -- spdk/autotest.sh@185 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:21:56.891 11:15:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:56.891 11:15:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:56.891 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:21:56.891 ************************************ 00:21:56.891 START TEST blockdev_general 
00:21:56.891 ************************************ 00:21:56.891 11:15:15 blockdev_general -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:21:56.891 * Looking for test storage... 00:21:56.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:56.891 11:15:15 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:21:56.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=50911 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 50911 00:21:56.891 11:15:15 blockdev_general -- common/autotest_common.sh@827 -- # '[' -z 50911 ']' 00:21:56.891 11:15:15 blockdev_general -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.891 11:15:15 blockdev_general -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:56.891 11:15:15 blockdev_general -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
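[editor's note] blockdev.sh starts the target with --wait-for-rpc, which parks the framework before subsystem initialization until an explicit framework_start_init RPC arrives; waitforlisten polls the socket in the meantime, and setup_bdev_conf then drives the bdev creation traced in the notices below. A condensed sketch of that handshake, with a single malloc bdev standing in for the full Malloc0-9/split/passthru/raid/AIO topology the real setup builds:

    #!/bin/bash
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/build/bin/spdk_tgt --wait-for-rpc &
    tgt_pid=$!
    # What waitforlisten amounts to: poll until the RPC socket answers.
    until "$spdk"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    # Leave the startup state; subsystems initialize now.
    "$spdk"/scripts/rpc.py framework_start_init
    # 32 MiB at 512-byte blocks, i.e. the 65536-block Malloc0 seen further down.
    "$spdk"/scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    kill "$tgt_pid"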
00:21:56.891 11:15:15 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:21:56.891 11:15:15 blockdev_general -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:56.891 11:15:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:21:56.891 [2024-05-15 11:15:15.461525] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:21:56.891 [2024-05-15 11:15:15.461738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50911 ] 00:21:57.149 [2024-05-15 11:15:15.613398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.406 [2024-05-15 11:15:15.832186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.665 11:15:16 blockdev_general -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:57.665 11:15:16 blockdev_general -- common/autotest_common.sh@860 -- # return 0 00:21:57.665 11:15:16 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:21:57.665 11:15:16 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:21:57.665 11:15:16 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:21:57.665 11:15:16 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.665 11:15:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:21:58.599 [2024-05-15 11:15:17.072461] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:58.599 [2024-05-15 11:15:17.072580] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:58.599 00:21:58.599 [2024-05-15 11:15:17.080414] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:58.599 [2024-05-15 11:15:17.080497] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:58.599 00:21:58.599 Malloc0 00:21:58.599 Malloc1 00:21:58.599 Malloc2 00:21:58.858 Malloc3 00:21:58.858 Malloc4 00:21:58.858 Malloc5 00:21:58.858 Malloc6 00:21:58.858 Malloc7 00:21:58.858 Malloc8 00:21:58.858 Malloc9 00:21:58.858 [2024-05-15 11:15:17.485566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:21:58.858 [2024-05-15 11:15:17.485656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.858 [2024-05-15 11:15:17.485708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:21:58.858 [2024-05-15 11:15:17.485738] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.858 [2024-05-15 11:15:17.487627] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.858 [2024-05-15 11:15:17.487692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:21:58.858 TestPT 00:21:59.117 11:15:17 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.117 11:15:17 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:21:59.117 5000+0 records in 00:21:59.117 5000+0 records out 00:21:59.117 10240000 bytes (10 MB) copied, 0.0211031 s, 485 MB/s 00:21:59.117 11:15:17 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:21:59.117 
11:15:17 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.117 11:15:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:21:59.117 AIO0 00:21:59.117 11:15:17 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:21:59.118 11:15:17 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:21:59.118 11:15:17 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:21:59.119 11:15:17 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "e6241df8-5c70-444c-a5a6-bf36e5ba14e4"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e6241df8-5c70-444c-a5a6-bf36e5ba14e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' 
"deff929f-23ef-5c8c-8ca0-002f5e896243"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "deff929f-23ef-5c8c-8ca0-002f5e896243",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b856968b-3a21-56b8-98f6-9a3152ceb52b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b856968b-3a21-56b8-98f6-9a3152ceb52b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "1b5d8175-0295-547a-a15a-4548869a1377"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1b5d8175-0295-547a-a15a-4548869a1377",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "d5b33b60-b91e-5fe9-a947-13e8925b49c8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d5b33b60-b91e-5fe9-a947-13e8925b49c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "5be64d38-5a0f-563d-b4ba-ae3bba94b03c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5be64d38-5a0f-563d-b4ba-ae3bba94b03c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "f02d28d8-0356-548f-af9c-4f0d43480df1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f02d28d8-0356-548f-af9c-4f0d43480df1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "38210f86-5a60-5997-8a1b-1afc859200f9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "38210f86-5a60-5997-8a1b-1afc859200f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "495fe434-9473-52a5-85a3-84e46b39ccc0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "495fe434-9473-52a5-85a3-84e46b39ccc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "297354c3-de71-5f9a-b2c7-d6c6fc09d3bd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "297354c3-de71-5f9a-b2c7-d6c6fc09d3bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "69e5dad2-4bf8-595e-aa53-3325b304caa0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "69e5dad2-4bf8-595e-aa53-3325b304caa0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "85471d2f-16e4-56fd-b9c7-ad52fdf4cea6"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "85471d2f-16e4-56fd-b9c7-ad52fdf4cea6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "f0d9b14b-94a7-4bc0-a8ed-efe82cde5424"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f0d9b14b-94a7-4bc0-a8ed-efe82cde5424",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f0d9b14b-94a7-4bc0-a8ed-efe82cde5424",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "05b20028-2822-40f8-85e6-5ce9531cb40f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f916ea73-d078-4439-bc32-72194f3ad595",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "05613b33-997f-4117-b145-a7219e7b4b58"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "05613b33-997f-4117-b145-a7219e7b4b58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "05613b33-997f-4117-b145-a7219e7b4b58",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6029ef60-0937-41fb-a753-8b256a9968e3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ec8dc7ee-4394-4c7a-8b8d-74ee77462520",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a94ac6b6-1b36-4b97-8064-185aeb1b5f81"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a94ac6b6-1b36-4b97-8064-185aeb1b5f81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a94ac6b6-1b36-4b97-8064-185aeb1b5f81",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e6e25d7c-e3b0-43c9-8384-c84dbf12683c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "0b0e086d-0756-4be4-ae9f-a501182ac83b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ac603c2e-c8fb-4779-a1f5-5eeb5152465a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ac603c2e-c8fb-4779-a1f5-5eeb5152465a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:21:59.377 11:15:17 
blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:21:59.377 11:15:17 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:21:59.377 11:15:17 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:21:59.377 11:15:17 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 50911 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@946 -- # '[' -z 50911 ']' 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@950 -- # kill -0 50911 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@951 -- # uname 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 50911 00:21:59.377 killing process with pid 50911 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50911' 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@965 -- # kill 50911 00:21:59.377 11:15:17 blockdev_general -- common/autotest_common.sh@970 -- # wait 50911 00:22:02.866 11:15:20 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:02.866 11:15:20 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:22:02.866 11:15:20 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:22:02.866 11:15:20 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:02.866 11:15:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:22:02.866 ************************************ 00:22:02.866 START TEST bdev_hello_world 00:22:02.866 ************************************ 00:22:02.866 11:15:21 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:22:02.866 [2024-05-15 11:15:21.150518] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
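[editor's note] Before the teardown just above, the harness snapshotted every unclaimed bdev over RPC (bdev_get_bdevs filtered through jq's select(.claimed == false)), split the result into the bdevs/bdevs_name arrays, and took the first name, Malloc0, as hello_world_bdev for the test now starting. A condensed sketch of that enumeration:

    #!/bin/bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # One compact JSON object per array slot, unclaimed bdevs only.
    mapfile -t bdevs < <("$rpc" bdev_get_bdevs | jq -c '.[] | select(.claimed == false)')
    # Just the names: Malloc0, Malloc1p0, ..., AIO0 in the dump above.
    mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
    hello_world_bdev=${bdevs_name[0]}
    printf 'found %d unclaimed bdevs; hello-world device: %s\n' \
        "${#bdevs_name[@]}" "$hello_world_bdev"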
00:22:02.866 [2024-05-15 11:15:21.150710] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51002 ] 00:22:02.866 [2024-05-15 11:15:21.303254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.125 [2024-05-15 11:15:21.527034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.383 [2024-05-15 11:15:21.962017] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:03.383 [2024-05-15 11:15:21.962134] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:03.383 [2024-05-15 11:15:21.969978] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:03.383 [2024-05-15 11:15:21.970029] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:03.383 [2024-05-15 11:15:21.978007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:03.383 [2024-05-15 11:15:21.978059] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:22:03.383 [2024-05-15 11:15:21.978101] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:22:03.642 [2024-05-15 11:15:22.156240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:03.642 [2024-05-15 11:15:22.156358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.642 [2024-05-15 11:15:22.156415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002bb80 00:22:03.642 [2024-05-15 11:15:22.156450] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.642 [2024-05-15 11:15:22.158549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.642 [2024-05-15 11:15:22.158595] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:22:03.899 [2024-05-15 11:15:22.446388] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:03.899 [2024-05-15 11:15:22.446460] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:22:03.899 [2024-05-15 11:15:22.446524] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:03.899 [2024-05-15 11:15:22.446575] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:03.899 [2024-05-15 11:15:22.446637] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:03.899 [2024-05-15 11:15:22.446668] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:03.899 [2024-05-15 11:15:22.446736] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
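[editor's note] The hello_bdev notices above trace the whole example flow: open Malloc0, get an I/O channel, write the string, read it back. It reproduces standalone if the harness's bdev.json is replaced by a minimal config; the file below is an illustrative stand-in declaring only the one 32 MiB malloc bdev the example needs (the real test passes test/bdev/bdev.json):

    #!/bin/bash
    # Hypothetical minimal JSON config: one 65536-block, 512-byte-block malloc bdev.
    cat > /tmp/hello_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/hello_bdev.json -b Malloc0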
00:22:03.899 00:22:03.899 [2024-05-15 11:15:22.446774] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:06.430 ************************************ 00:22:06.430 END TEST bdev_hello_world 00:22:06.430 ************************************ 00:22:06.430 00:22:06.430 real 0m3.651s 00:22:06.430 user 0m3.058s 00:22:06.430 sys 0m0.385s 00:22:06.430 11:15:24 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:06.430 11:15:24 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:06.430 11:15:24 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:22:06.430 11:15:24 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:06.430 11:15:24 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:06.430 11:15:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:22:06.430 ************************************ 00:22:06.430 START TEST bdev_bounds 00:22:06.430 ************************************ 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=51077 00:22:06.430 Process bdevio pid: 51077 00:22:06.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 51077' 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 51077 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 51077 ']' 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:06.430 11:15:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:06.430 [2024-05-15 11:15:24.852878] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
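[editor's note] bdevio runs with -w here, so it finishes setup and then blocks until a perform_tests RPC arrives; tests.py issues that call, which is what launches the per-bdev suites printed below. Condensed from the invocation above, with a sleep as a crude stand-in for waitforlisten's polling:

    #!/bin/bash
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/test/bdev/bdevio/bdevio -w -s 0 --json "$spdk"/test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 1   # crude stand-in for waitforlisten's socket polling
    # Kick off every registered suite; bdevio exits once they complete.
    "$spdk"/test/bdev/bdevio/tests.py perform_tests
    wait "$bdevio_pid"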
00:22:06.430 [2024-05-15 11:15:24.853057] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51077 ] 00:22:06.430 [2024-05-15 11:15:25.008046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:06.689 [2024-05-15 11:15:25.250686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.689 [2024-05-15 11:15:25.250837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.689 [2024-05-15 11:15:25.250841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.256 [2024-05-15 11:15:25.690674] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:07.257 [2024-05-15 11:15:25.690842] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:07.257 [2024-05-15 11:15:25.698669] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:07.257 [2024-05-15 11:15:25.698740] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:07.257 [2024-05-15 11:15:25.706680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:07.257 [2024-05-15 11:15:25.706744] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:22:07.257 [2024-05-15 11:15:25.706772] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:22:07.257 [2024-05-15 11:15:25.889632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:07.257 [2024-05-15 11:15:25.889735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.257 [2024-05-15 11:15:25.889971] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002c180 00:22:07.257 [2024-05-15 11:15:25.890012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.515 [2024-05-15 11:15:25.891946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.515 [2024-05-15 11:15:25.892010] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:22:07.773 11:15:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:07.774 11:15:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:22:07.774 11:15:26 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:07.774 I/O targets: 00:22:07.774 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:22:07.774 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:22:07.774 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:22:07.774 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:22:07.774 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:22:07.774 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:22:07.774 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:22:07.774 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:22:07.774 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:22:07.774 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:22:07.774 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:22:07.774 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:22:07.774 raid0: 131072 blocks of 512 bytes (64 MiB) 00:22:07.774 concat0: 131072 blocks of 512 bytes (64 MiB) 00:22:07.774 raid1: 65536 
blocks of 512 bytes (32 MiB) 00:22:07.774 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:22:07.774 00:22:07.774 00:22:07.774 CUnit - A unit testing framework for C - Version 2.1-3 00:22:07.774 http://cunit.sourceforge.net/ 00:22:07.774 00:22:07.774 00:22:07.774 Suite: bdevio tests on: AIO0 00:22:07.774 Test: blockdev write read block ...passed 00:22:07.774 Test: blockdev write zeroes read block ...passed 00:22:07.774 Test: blockdev write zeroes read no split ...passed 00:22:07.774 Test: blockdev write zeroes read split ...passed 00:22:07.774 Test: blockdev write zeroes read split partial ...passed 00:22:07.774 Test: blockdev reset ...passed 00:22:07.774 Test: blockdev write read 8 blocks ...passed 00:22:07.774 Test: blockdev write read size > 128k ...passed 00:22:07.774 Test: blockdev write read invalid size ...passed 00:22:07.774 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:07.774 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:07.774 Test: blockdev write read max offset ...passed 00:22:08.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.032 Test: blockdev writev readv 8 blocks ...passed 00:22:08.032 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.032 Test: blockdev writev readv block ...passed 00:22:08.032 Test: blockdev writev readv size > 128k ...passed 00:22:08.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.032 Test: blockdev comparev and writev ...passed 00:22:08.032 Test: blockdev nvme passthru rw ...passed 00:22:08.032 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.032 Test: blockdev nvme admin passthru ...passed 00:22:08.032 Test: blockdev copy ...passed 00:22:08.032 Suite: bdevio tests on: raid1 00:22:08.032 Test: blockdev write read block ...passed 00:22:08.032 Test: blockdev write zeroes read block ...passed 00:22:08.032 Test: blockdev write zeroes read no split ...passed 00:22:08.032 Test: blockdev write zeroes read split ...passed 00:22:08.032 Test: blockdev write zeroes read split partial ...passed 00:22:08.032 Test: blockdev reset ...passed 00:22:08.032 Test: blockdev write read 8 blocks ...passed 00:22:08.032 Test: blockdev write read size > 128k ...passed 00:22:08.032 Test: blockdev write read invalid size ...passed 00:22:08.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.032 Test: blockdev write read max offset ...passed 00:22:08.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.032 Test: blockdev writev readv 8 blocks ...passed 00:22:08.032 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.032 Test: blockdev writev readv block ...passed 00:22:08.032 Test: blockdev writev readv size > 128k ...passed 00:22:08.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.032 Test: blockdev comparev and writev ...passed 00:22:08.032 Test: blockdev nvme passthru rw ...passed 00:22:08.032 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.032 Test: blockdev nvme admin passthru ...passed 00:22:08.032 Test: blockdev copy ...passed 00:22:08.032 Suite: bdevio tests on: concat0 00:22:08.032 Test: blockdev write read block ...passed 00:22:08.032 Test: blockdev write zeroes read block ...passed 00:22:08.032 Test: blockdev write zeroes read no split ...passed 00:22:08.032 Test: blockdev write zeroes read split ...passed 00:22:08.032 Test: 
blockdev write zeroes read split partial ...passed 00:22:08.032 Test: blockdev reset ...passed 00:22:08.032 Test: blockdev write read 8 blocks ...passed 00:22:08.032 Test: blockdev write read size > 128k ...passed 00:22:08.032 Test: blockdev write read invalid size ...passed 00:22:08.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.032 Test: blockdev write read max offset ...passed 00:22:08.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.032 Test: blockdev writev readv 8 blocks ...passed 00:22:08.032 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.032 Test: blockdev writev readv block ...passed 00:22:08.032 Test: blockdev writev readv size > 128k ...passed 00:22:08.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.032 Test: blockdev comparev and writev ...passed 00:22:08.032 Test: blockdev nvme passthru rw ...passed 00:22:08.032 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.032 Test: blockdev nvme admin passthru ...passed 00:22:08.032 Test: blockdev copy ...passed 00:22:08.032 Suite: bdevio tests on: raid0 00:22:08.032 Test: blockdev write read block ...passed 00:22:08.032 Test: blockdev write zeroes read block ...passed 00:22:08.032 Test: blockdev write zeroes read no split ...passed 00:22:08.032 Test: blockdev write zeroes read split ...passed 00:22:08.032 Test: blockdev write zeroes read split partial ...passed 00:22:08.032 Test: blockdev reset ...passed 00:22:08.032 Test: blockdev write read 8 blocks ...passed 00:22:08.032 Test: blockdev write read size > 128k ...passed 00:22:08.032 Test: blockdev write read invalid size ...passed 00:22:08.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.032 Test: blockdev write read max offset ...passed 00:22:08.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.032 Test: blockdev writev readv 8 blocks ...passed 00:22:08.032 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.032 Test: blockdev writev readv block ...passed 00:22:08.032 Test: blockdev writev readv size > 128k ...passed 00:22:08.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.032 Test: blockdev comparev and writev ...passed 00:22:08.032 Test: blockdev nvme passthru rw ...passed 00:22:08.032 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.032 Test: blockdev nvme admin passthru ...passed 00:22:08.032 Test: blockdev copy ...passed 00:22:08.032 Suite: bdevio tests on: TestPT 00:22:08.032 Test: blockdev write read block ...passed 00:22:08.033 Test: blockdev write zeroes read block ...passed 00:22:08.033 Test: blockdev write zeroes read no split ...passed 00:22:08.033 Test: blockdev write zeroes read split ...passed 00:22:08.291 Test: blockdev write zeroes read split partial ...passed 00:22:08.291 Test: blockdev reset ...passed 00:22:08.291 Test: blockdev write read 8 blocks ...passed 00:22:08.291 Test: blockdev write read size > 128k ...passed 00:22:08.291 Test: blockdev write read invalid size ...passed 00:22:08.291 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.291 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.291 Test: blockdev write read max offset ...passed 00:22:08.291 Test: blockdev write read 2 blocks on 
overlapped address offset ...passed 00:22:08.291 Test: blockdev writev readv 8 blocks ...passed 00:22:08.291 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.291 Test: blockdev writev readv block ...passed 00:22:08.291 Test: blockdev writev readv size > 128k ...passed 00:22:08.291 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.291 Test: blockdev comparev and writev ...passed 00:22:08.291 Test: blockdev nvme passthru rw ...passed 00:22:08.291 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.291 Test: blockdev nvme admin passthru ...passed 00:22:08.291 Test: blockdev copy ...passed 00:22:08.291 Suite: bdevio tests on: Malloc2p7 00:22:08.291 Test: blockdev write read block ...passed 00:22:08.291 Test: blockdev write zeroes read block ...passed 00:22:08.291 Test: blockdev write zeroes read no split ...passed 00:22:08.291 Test: blockdev write zeroes read split ...passed 00:22:08.291 Test: blockdev write zeroes read split partial ...passed 00:22:08.291 Test: blockdev reset ...passed 00:22:08.291 Test: blockdev write read 8 blocks ...passed 00:22:08.291 Test: blockdev write read size > 128k ...passed 00:22:08.291 Test: blockdev write read invalid size ...passed 00:22:08.291 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.291 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.291 Test: blockdev write read max offset ...passed 00:22:08.291 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.291 Test: blockdev writev readv 8 blocks ...passed 00:22:08.291 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.291 Test: blockdev writev readv block ...passed 00:22:08.291 Test: blockdev writev readv size > 128k ...passed 00:22:08.291 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.291 Test: blockdev comparev and writev ...passed 00:22:08.291 Test: blockdev nvme passthru rw ...passed 00:22:08.291 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.291 Test: blockdev nvme admin passthru ...passed 00:22:08.291 Test: blockdev copy ...passed 00:22:08.291 Suite: bdevio tests on: Malloc2p6 00:22:08.291 Test: blockdev write read block ...passed 00:22:08.291 Test: blockdev write zeroes read block ...passed 00:22:08.291 Test: blockdev write zeroes read no split ...passed 00:22:08.291 Test: blockdev write zeroes read split ...passed 00:22:08.291 Test: blockdev write zeroes read split partial ...passed 00:22:08.291 Test: blockdev reset ...passed 00:22:08.291 Test: blockdev write read 8 blocks ...passed 00:22:08.291 Test: blockdev write read size > 128k ...passed 00:22:08.291 Test: blockdev write read invalid size ...passed 00:22:08.292 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.292 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.292 Test: blockdev write read max offset ...passed 00:22:08.292 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.292 Test: blockdev writev readv 8 blocks ...passed 00:22:08.292 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.292 Test: blockdev writev readv block ...passed 00:22:08.292 Test: blockdev writev readv size > 128k ...passed 00:22:08.292 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.292 Test: blockdev comparev and writev ...passed 00:22:08.292 Test: blockdev nvme passthru rw ...passed 00:22:08.292 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.292 
Test: blockdev nvme admin passthru ...passed 00:22:08.292 Test: blockdev copy ...passed 00:22:08.292 Suite: bdevio tests on: Malloc2p5 00:22:08.292 Test: blockdev write read block ...passed 00:22:08.292 Test: blockdev write zeroes read block ...passed 00:22:08.292 Test: blockdev write zeroes read no split ...passed 00:22:08.292 Test: blockdev write zeroes read split ...passed 00:22:08.292 Test: blockdev write zeroes read split partial ...passed 00:22:08.292 Test: blockdev reset ...passed 00:22:08.292 Test: blockdev write read 8 blocks ...passed 00:22:08.292 Test: blockdev write read size > 128k ...passed 00:22:08.292 Test: blockdev write read invalid size ...passed 00:22:08.292 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.292 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.292 Test: blockdev write read max offset ...passed 00:22:08.292 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.292 Test: blockdev writev readv 8 blocks ...passed 00:22:08.292 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.292 Test: blockdev writev readv block ...passed 00:22:08.292 Test: blockdev writev readv size > 128k ...passed 00:22:08.292 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.292 Test: blockdev comparev and writev ...passed 00:22:08.292 Test: blockdev nvme passthru rw ...passed 00:22:08.292 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.292 Test: blockdev nvme admin passthru ...passed 00:22:08.292 Test: blockdev copy ...passed 00:22:08.292 Suite: bdevio tests on: Malloc2p4 00:22:08.292 Test: blockdev write read block ...passed 00:22:08.292 Test: blockdev write zeroes read block ...passed 00:22:08.292 Test: blockdev write zeroes read no split ...passed 00:22:08.292 Test: blockdev write zeroes read split ...passed 00:22:08.550 Test: blockdev write zeroes read split partial ...passed 00:22:08.550 Test: blockdev reset ...passed 00:22:08.550 Test: blockdev write read 8 blocks ...passed 00:22:08.550 Test: blockdev write read size > 128k ...passed 00:22:08.550 Test: blockdev write read invalid size ...passed 00:22:08.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.550 Test: blockdev write read max offset ...passed 00:22:08.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.550 Test: blockdev writev readv 8 blocks ...passed 00:22:08.550 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.550 Test: blockdev writev readv block ...passed 00:22:08.550 Test: blockdev writev readv size > 128k ...passed 00:22:08.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.550 Test: blockdev comparev and writev ...passed 00:22:08.550 Test: blockdev nvme passthru rw ...passed 00:22:08.550 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.550 Test: blockdev nvme admin passthru ...passed 00:22:08.550 Test: blockdev copy ...passed 00:22:08.550 Suite: bdevio tests on: Malloc2p3 00:22:08.550 Test: blockdev write read block ...passed 00:22:08.550 Test: blockdev write zeroes read block ...passed 00:22:08.550 Test: blockdev write zeroes read no split ...passed 00:22:08.550 Test: blockdev write zeroes read split ...passed 00:22:08.550 Test: blockdev write zeroes read split partial ...passed 00:22:08.550 Test: blockdev reset ...passed 00:22:08.550 Test: blockdev write read 8 blocks ...passed 
00:22:08.550 Test: blockdev write read size > 128k ...passed 00:22:08.550 Test: blockdev write read invalid size ...passed 00:22:08.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.550 Test: blockdev write read max offset ...passed 00:22:08.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.550 Test: blockdev writev readv 8 blocks ...passed 00:22:08.550 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.551 Test: blockdev writev readv block ...passed 00:22:08.551 Test: blockdev writev readv size > 128k ...passed 00:22:08.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.551 Test: blockdev comparev and writev ...passed 00:22:08.551 Test: blockdev nvme passthru rw ...passed 00:22:08.551 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.551 Test: blockdev nvme admin passthru ...passed 00:22:08.551 Test: blockdev copy ...passed 00:22:08.551 Suite: bdevio tests on: Malloc2p2 00:22:08.551 Test: blockdev write read block ...passed 00:22:08.551 Test: blockdev write zeroes read block ...passed 00:22:08.551 Test: blockdev write zeroes read no split ...passed 00:22:08.551 Test: blockdev write zeroes read split ...passed 00:22:08.551 Test: blockdev write zeroes read split partial ...passed 00:22:08.551 Test: blockdev reset ...passed 00:22:08.551 Test: blockdev write read 8 blocks ...passed 00:22:08.551 Test: blockdev write read size > 128k ...passed 00:22:08.551 Test: blockdev write read invalid size ...passed 00:22:08.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.551 Test: blockdev write read max offset ...passed 00:22:08.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.551 Test: blockdev writev readv 8 blocks ...passed 00:22:08.551 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.551 Test: blockdev writev readv block ...passed 00:22:08.551 Test: blockdev writev readv size > 128k ...passed 00:22:08.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.551 Test: blockdev comparev and writev ...passed 00:22:08.551 Test: blockdev nvme passthru rw ...passed 00:22:08.551 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.551 Test: blockdev nvme admin passthru ...passed 00:22:08.551 Test: blockdev copy ...passed 00:22:08.551 Suite: bdevio tests on: Malloc2p1 00:22:08.551 Test: blockdev write read block ...passed 00:22:08.551 Test: blockdev write zeroes read block ...passed 00:22:08.551 Test: blockdev write zeroes read no split ...passed 00:22:08.551 Test: blockdev write zeroes read split ...passed 00:22:08.551 Test: blockdev write zeroes read split partial ...passed 00:22:08.551 Test: blockdev reset ...passed 00:22:08.551 Test: blockdev write read 8 blocks ...passed 00:22:08.551 Test: blockdev write read size > 128k ...passed 00:22:08.551 Test: blockdev write read invalid size ...passed 00:22:08.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.551 Test: blockdev write read max offset ...passed 00:22:08.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.551 Test: blockdev writev readv 8 blocks ...passed 00:22:08.551 Test: blockdev writev readv 30 x 
1block ...passed 00:22:08.551 Test: blockdev writev readv block ...passed 00:22:08.551 Test: blockdev writev readv size > 128k ...passed 00:22:08.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.551 Test: blockdev comparev and writev ...passed 00:22:08.551 Test: blockdev nvme passthru rw ...passed 00:22:08.551 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.551 Test: blockdev nvme admin passthru ...passed 00:22:08.551 Test: blockdev copy ...passed 00:22:08.551 Suite: bdevio tests on: Malloc2p0 00:22:08.551 Test: blockdev write read block ...passed 00:22:08.551 Test: blockdev write zeroes read block ...passed 00:22:08.551 Test: blockdev write zeroes read no split ...passed 00:22:08.551 Test: blockdev write zeroes read split ...passed 00:22:08.810 Test: blockdev write zeroes read split partial ...passed 00:22:08.810 Test: blockdev reset ...passed 00:22:08.810 Test: blockdev write read 8 blocks ...passed 00:22:08.810 Test: blockdev write read size > 128k ...passed 00:22:08.810 Test: blockdev write read invalid size ...passed 00:22:08.810 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.810 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.810 Test: blockdev write read max offset ...passed 00:22:08.810 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.810 Test: blockdev writev readv 8 blocks ...passed 00:22:08.810 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.810 Test: blockdev writev readv block ...passed 00:22:08.810 Test: blockdev writev readv size > 128k ...passed 00:22:08.810 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.810 Test: blockdev comparev and writev ...passed 00:22:08.810 Test: blockdev nvme passthru rw ...passed 00:22:08.810 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.810 Test: blockdev nvme admin passthru ...passed 00:22:08.810 Test: blockdev copy ...passed 00:22:08.810 Suite: bdevio tests on: Malloc1p1 00:22:08.810 Test: blockdev write read block ...passed 00:22:08.810 Test: blockdev write zeroes read block ...passed 00:22:08.810 Test: blockdev write zeroes read no split ...passed 00:22:08.810 Test: blockdev write zeroes read split ...passed 00:22:08.810 Test: blockdev write zeroes read split partial ...passed 00:22:08.810 Test: blockdev reset ...passed 00:22:08.810 Test: blockdev write read 8 blocks ...passed 00:22:08.810 Test: blockdev write read size > 128k ...passed 00:22:08.810 Test: blockdev write read invalid size ...passed 00:22:08.810 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.810 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.810 Test: blockdev write read max offset ...passed 00:22:08.810 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.810 Test: blockdev writev readv 8 blocks ...passed 00:22:08.810 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.810 Test: blockdev writev readv block ...passed 00:22:08.810 Test: blockdev writev readv size > 128k ...passed 00:22:08.810 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.810 Test: blockdev comparev and writev ...passed 00:22:08.810 Test: blockdev nvme passthru rw ...passed 00:22:08.810 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.810 Test: blockdev nvme admin passthru ...passed 00:22:08.810 Test: blockdev copy ...passed 00:22:08.810 Suite: bdevio tests on: Malloc1p0 
00:22:08.810 Test: blockdev write read block ...passed 00:22:08.810 Test: blockdev write zeroes read block ...passed 00:22:08.810 Test: blockdev write zeroes read no split ...passed 00:22:08.810 Test: blockdev write zeroes read split ...passed 00:22:08.810 Test: blockdev write zeroes read split partial ...passed 00:22:08.810 Test: blockdev reset ...passed 00:22:08.810 Test: blockdev write read 8 blocks ...passed 00:22:08.810 Test: blockdev write read size > 128k ...passed 00:22:08.810 Test: blockdev write read invalid size ...passed 00:22:08.810 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.810 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.810 Test: blockdev write read max offset ...passed 00:22:08.810 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.810 Test: blockdev writev readv 8 blocks ...passed 00:22:08.810 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.810 Test: blockdev writev readv block ...passed 00:22:08.810 Test: blockdev writev readv size > 128k ...passed 00:22:08.810 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.810 Test: blockdev comparev and writev ...passed 00:22:08.810 Test: blockdev nvme passthru rw ...passed 00:22:08.810 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.810 Test: blockdev nvme admin passthru ...passed 00:22:08.810 Test: blockdev copy ...passed 00:22:08.810 Suite: bdevio tests on: Malloc0 00:22:08.810 Test: blockdev write read block ...passed 00:22:08.810 Test: blockdev write zeroes read block ...passed 00:22:08.810 Test: blockdev write zeroes read no split ...passed 00:22:08.810 Test: blockdev write zeroes read split ...passed 00:22:08.810 Test: blockdev write zeroes read split partial ...passed 00:22:08.810 Test: blockdev reset ...passed 00:22:08.810 Test: blockdev write read 8 blocks ...passed 00:22:08.810 Test: blockdev write read size > 128k ...passed 00:22:08.810 Test: blockdev write read invalid size ...passed 00:22:08.810 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.810 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.810 Test: blockdev write read max offset ...passed 00:22:08.810 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.810 Test: blockdev writev readv 8 blocks ...passed 00:22:08.810 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.810 Test: blockdev writev readv block ...passed 00:22:08.810 Test: blockdev writev readv size > 128k ...passed 00:22:08.810 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.810 Test: blockdev comparev and writev ...passed 00:22:08.810 Test: blockdev nvme passthru rw ...passed 00:22:08.810 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.810 Test: blockdev nvme admin passthru ...passed 00:22:08.810 Test: blockdev copy ...passed 00:22:08.810 00:22:08.810 Run Summary: Type Total Ran Passed Failed Inactive 00:22:08.810 suites 16 16 n/a 0 0 00:22:08.810 tests 368 368 368 0 0 00:22:08.810 asserts 2224 2224 2224 0 n/a 00:22:08.810 00:22:08.810 Elapsed time = 2.990 seconds 00:22:08.810 0 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 51077 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 51077 ']' 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 51077 00:22:08.810 11:15:27 
blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 51077 00:22:08.810 killing process with pid 51077 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 51077' 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@965 -- # kill 51077 00:22:08.810 11:15:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@970 -- # wait 51077 00:22:11.343 ************************************ 00:22:11.343 END TEST bdev_bounds 00:22:11.343 ************************************ 00:22:11.343 11:15:29 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:22:11.343 00:22:11.343 real 0m4.707s 00:22:11.343 user 0m11.824s 00:22:11.343 sys 0m0.563s 00:22:11.343 11:15:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:11.343 11:15:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:11.343 11:15:29 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:22:11.343 11:15:29 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:11.343 11:15:29 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:11.343 11:15:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:22:11.343 ************************************ 00:22:11.343 START TEST bdev_nbd 00:22:11.343 ************************************ 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # modprobe -q nbd nbds_max=16 00:22:11.343 ************************************ 00:22:11.343 END TEST bdev_nbd 00:22:11.343 ************************************ 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- 
bdev/blockdev.sh@309 -- # return 0 00:22:11.343 00:22:11.343 real 0m0.010s 00:22:11.343 user 0m0.003s 00:22:11.343 sys 0m0.008s 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:11.343 11:15:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:11.343 11:15:29 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:22:11.343 11:15:29 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:22:11.343 11:15:29 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:22:11.343 11:15:29 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:22:11.343 11:15:29 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:11.343 11:15:29 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:11.343 11:15:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:22:11.343 ************************************ 00:22:11.343 START TEST bdev_fio 00:22:11.343 ************************************ 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:11.343 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:22:11.343 11:15:29 
blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 
-- # echo '[job_Malloc2p7]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:11.343 11:15:29 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:11.343 ************************************ 00:22:11.343 START TEST bdev_fio_rw_verify 00:22:11.343 ************************************ 00:22:11.343 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:11.343 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:11.343 11:15:29 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:11.343 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=(libasan libclang_rt.asan) 00:22:11.343 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:11.343 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:11.343 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:22:11.343 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:11.343 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:11.344 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:11.344 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:22:11.344 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:11.344 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/lib64/libasan.so.6 00:22:11.344 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /lib64/libasan.so.6 ]] 00:22:11.344 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:22:11.344 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:11.344 11:15:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:11.344 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.344 fio-3.35 00:22:11.344 Starting 16 threads 00:22:26.229 00:22:26.229 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=51240: Wed May 15 11:15:42 2024 00:22:26.229 read: IOPS=103k, BW=404MiB/s (423MB/s)(4048MiB/10027msec) 00:22:26.229 slat (nsec): min=906, max=47650k, avg=10485.74, stdev=153604.82 00:22:26.229 clat (usec): min=4, max=130206, avg=113.86, stdev=595.36 00:22:26.229 lat (usec): min=9, max=130222, avg=124.35, stdev=614.60 00:22:26.229 clat percentiles (usec): 00:22:26.229 | 50.000th=[ 69], 99.000th=[ 717], 99.900th=[10159], 99.990th=[22152], 00:22:26.229 | 99.999th=[47973] 00:22:26.229 write: IOPS=165k, BW=645MiB/s (677MB/s)(6436MiB/9972msec); 0 zone resets 00:22:26.229 slat (usec): min=3, max=138462, avg=63.47, stdev=1048.70 00:22:26.229 clat (usec): min=4, max=138630, avg=303.93, stdev=2004.83 00:22:26.229 lat (usec): min=20, max=138649, avg=367.40, stdev=2262.98 00:22:26.229 clat percentiles (usec): 00:22:26.229 | 50.000th=[ 119], 99.000th=[ 6390], 99.900th=[ 28967], 00:22:26.229 | 99.990th=[ 71828], 99.999th=[111674] 00:22:26.229 bw ( KiB/s): min=432692, max=914383, per=98.38%, avg=650237.05, stdev=8219.01, samples=304 00:22:26.229 iops : min=108169, max=228590, avg=162555.42, stdev=2054.74, samples=304 00:22:26.229 lat (usec) : 10=0.02%, 20=0.82%, 50=18.90%, 100=36.79%, 250=37.85% 00:22:26.229 lat (usec) : 500=2.28%, 750=1.95%, 1000=0.21% 00:22:26.229 lat (msec) : 2=0.21%, 4=0.14%, 10=0.35%, 20=0.36%, 50=0.10% 00:22:26.229 lat (msec) : 100=0.02%, 250=0.01% 00:22:26.229 cpu : usr=52.21%, sys=1.12%, ctx=19438, majf=0, minf=114106 00:22:26.229 IO depths : 1=12.4%, 2=24.7%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:26.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.229 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.229 issued rwts: total=1036168,1647727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.229 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:26.229 00:22:26.229 Run status group 0 (all jobs): 00:22:26.229 READ: bw=404MiB/s (423MB/s), 404MiB/s-404MiB/s (423MB/s-423MB/s), io=4048MiB (4244MB), run=10027-10027msec 00:22:26.229 WRITE: bw=645MiB/s (677MB/s), 645MiB/s-645MiB/s (677MB/s-677MB/s), io=6436MiB (6749MB), run=9972-9972msec 00:22:26.229 ----------------------------------------------------- 00:22:26.229 Suppressions used: 00:22:26.229 count bytes template 00:22:26.229 16 140 /usr/src/fio/parse.c 00:22:26.229 10089 968544 /usr/src/fio/iolog.c 00:22:26.229 2 596 libcrypto.so 00:22:26.229 ----------------------------------------------------- 00:22:26.229 00:22:26.229 ************************************ 00:22:26.229 END TEST bdev_fio_rw_verify 00:22:26.229 
************************************ 00:22:26.229 00:22:26.229 real 0m15.126s 00:22:26.229 user 1m36.038s 00:22:26.229 sys 0m2.416s 00:22:26.229 11:15:44 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:26.229 11:15:44 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:22:26.491 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:26.492 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "e6241df8-5c70-444c-a5a6-bf36e5ba14e4"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e6241df8-5c70-444c-a5a6-bf36e5ba14e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "deff929f-23ef-5c8c-8ca0-002f5e896243"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "deff929f-23ef-5c8c-8ca0-002f5e896243",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b856968b-3a21-56b8-98f6-9a3152ceb52b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b856968b-3a21-56b8-98f6-9a3152ceb52b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "1b5d8175-0295-547a-a15a-4548869a1377"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1b5d8175-0295-547a-a15a-4548869a1377",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "d5b33b60-b91e-5fe9-a947-13e8925b49c8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d5b33b60-b91e-5fe9-a947-13e8925b49c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "5be64d38-5a0f-563d-b4ba-ae3bba94b03c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5be64d38-5a0f-563d-b4ba-ae3bba94b03c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' 
"offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "f02d28d8-0356-548f-af9c-4f0d43480df1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f02d28d8-0356-548f-af9c-4f0d43480df1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "38210f86-5a60-5997-8a1b-1afc859200f9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "38210f86-5a60-5997-8a1b-1afc859200f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "495fe434-9473-52a5-85a3-84e46b39ccc0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "495fe434-9473-52a5-85a3-84e46b39ccc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "297354c3-de71-5f9a-b2c7-d6c6fc09d3bd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "297354c3-de71-5f9a-b2c7-d6c6fc09d3bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "69e5dad2-4bf8-595e-aa53-3325b304caa0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "69e5dad2-4bf8-595e-aa53-3325b304caa0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' 
' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "85471d2f-16e4-56fd-b9c7-ad52fdf4cea6"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "85471d2f-16e4-56fd-b9c7-ad52fdf4cea6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "f0d9b14b-94a7-4bc0-a8ed-efe82cde5424"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f0d9b14b-94a7-4bc0-a8ed-efe82cde5424",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f0d9b14b-94a7-4bc0-a8ed-efe82cde5424",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "05b20028-2822-40f8-85e6-5ce9531cb40f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f916ea73-d078-4439-bc32-72194f3ad595",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "05613b33-997f-4117-b145-a7219e7b4b58"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "05613b33-997f-4117-b145-a7219e7b4b58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "05613b33-997f-4117-b145-a7219e7b4b58",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6029ef60-0937-41fb-a753-8b256a9968e3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ec8dc7ee-4394-4c7a-8b8d-74ee77462520",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a94ac6b6-1b36-4b97-8064-185aeb1b5f81"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a94ac6b6-1b36-4b97-8064-185aeb1b5f81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a94ac6b6-1b36-4b97-8064-185aeb1b5f81",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e6e25d7c-e3b0-43c9-8384-c84dbf12683c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "0b0e086d-0756-4be4-ae9f-a501182ac83b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ac603c2e-c8fb-4779-a1f5-5eeb5152465a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ac603c2e-c8fb-4779-a1f5-5eeb5152465a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:22:26.492 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:22:26.492 Malloc1p0 00:22:26.492 Malloc1p1 00:22:26.492 Malloc2p0 00:22:26.492 Malloc2p1 00:22:26.492 Malloc2p2 
00:22:26.492 Malloc2p3 00:22:26.492 Malloc2p4 00:22:26.492 Malloc2p5 00:22:26.492 Malloc2p6 00:22:26.492 Malloc2p7 00:22:26.492 TestPT 00:22:26.492 raid0 00:22:26.492 concat0 ]] 00:22:26.492 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:26.493 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "e6241df8-5c70-444c-a5a6-bf36e5ba14e4"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e6241df8-5c70-444c-a5a6-bf36e5ba14e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "deff929f-23ef-5c8c-8ca0-002f5e896243"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "deff929f-23ef-5c8c-8ca0-002f5e896243",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b856968b-3a21-56b8-98f6-9a3152ceb52b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b856968b-3a21-56b8-98f6-9a3152ceb52b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "1b5d8175-0295-547a-a15a-4548869a1377"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1b5d8175-0295-547a-a15a-4548869a1377",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' 
'{' ' "name": "Malloc2p1",' ' "aliases": [' ' "d5b33b60-b91e-5fe9-a947-13e8925b49c8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d5b33b60-b91e-5fe9-a947-13e8925b49c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "5be64d38-5a0f-563d-b4ba-ae3bba94b03c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5be64d38-5a0f-563d-b4ba-ae3bba94b03c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "f02d28d8-0356-548f-af9c-4f0d43480df1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f02d28d8-0356-548f-af9c-4f0d43480df1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "38210f86-5a60-5997-8a1b-1afc859200f9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "38210f86-5a60-5997-8a1b-1afc859200f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "495fe434-9473-52a5-85a3-84e46b39ccc0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "495fe434-9473-52a5-85a3-84e46b39ccc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": 
true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "297354c3-de71-5f9a-b2c7-d6c6fc09d3bd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "297354c3-de71-5f9a-b2c7-d6c6fc09d3bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "69e5dad2-4bf8-595e-aa53-3325b304caa0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "69e5dad2-4bf8-595e-aa53-3325b304caa0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "85471d2f-16e4-56fd-b9c7-ad52fdf4cea6"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "85471d2f-16e4-56fd-b9c7-ad52fdf4cea6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "f0d9b14b-94a7-4bc0-a8ed-efe82cde5424"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f0d9b14b-94a7-4bc0-a8ed-efe82cde5424",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' 
},' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f0d9b14b-94a7-4bc0-a8ed-efe82cde5424",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "05b20028-2822-40f8-85e6-5ce9531cb40f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f916ea73-d078-4439-bc32-72194f3ad595",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "05613b33-997f-4117-b145-a7219e7b4b58"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "05613b33-997f-4117-b145-a7219e7b4b58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "05613b33-997f-4117-b145-a7219e7b4b58",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6029ef60-0937-41fb-a753-8b256a9968e3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ec8dc7ee-4394-4c7a-8b8d-74ee77462520",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a94ac6b6-1b36-4b97-8064-185aeb1b5f81"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a94ac6b6-1b36-4b97-8064-185aeb1b5f81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a94ac6b6-1b36-4b97-8064-185aeb1b5f81",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' 
"num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e6e25d7c-e3b0-43c9-8384-c84dbf12683c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "0b0e086d-0756-4be4-ae9f-a501182ac83b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ac603c2e-c8fb-4779-a1f5-5eeb5152465a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ac603c2e-c8fb-4779-a1f5-5eeb5152465a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:22:26.493 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.493 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 
00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:22:26.494 11:15:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:22:26.494 11:15:45 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:26.494 ************************************ 00:22:26.494 START TEST bdev_fio_trim 00:22:26.494 ************************************ 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=(libasan libclang_rt.asan) 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib=/lib64/libasan.so.6 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n /lib64/libasan.so.6 ]] 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # break 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:26.494 11:15:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:26.754 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, 
iodepth=8 00:22:26.754 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.754 fio-3.35 00:22:26.754 Starting 14 threads 00:22:38.949 00:22:38.949 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=51482: Wed May 15 11:15:57 2024 00:22:38.949 write: IOPS=321k, BW=1252MiB/s (1313MB/s)(12.2GiB/10014msec); 0 zone resets 00:22:38.949 slat (nsec): min=1043, max=38020k, avg=13545.73, stdev=225817.73 00:22:38.949 clat (usec): min=9, max=41117, avg=127.92, stdev=757.99 00:22:38.949 lat (usec): min=14, max=41130, avg=141.47, stdev=790.60 00:22:38.949 clat percentiles (usec): 00:22:38.949 | 50.000th=[ 73], 99.000th=[ 725], 99.900th=[13304], 99.990th=[22152], 00:22:38.949 | 99.999th=[28705] 00:22:38.949 bw ( MiB/s): min= 746, max= 2032, per=99.24%, avg=1242.51, stdev=28.04, samples=266 00:22:38.949 iops : min=191002, max=520442, avg=318079.05, stdev=7179.04, samples=266 00:22:38.949 trim: IOPS=321k, BW=1252MiB/s (1313MB/s)(12.2GiB/10014msec); 0 zone resets 00:22:38.949 slat (nsec): min=1894, max=34024k, avg=10002.36, stdev=185652.42 00:22:38.950 clat (nsec): min=1735, max=41131k, avg=111452.51, stdev=646249.75 00:22:38.950 lat (usec): min=6, max=41139, avg=121.45, stdev=672.36 00:22:38.950 clat percentiles (usec): 00:22:38.950 | 50.000th=[ 81], 99.000th=[ 178], 99.900th=[13042], 99.990th=[21103], 00:22:38.950 | 99.999th=[28181] 00:22:38.950 bw ( MiB/s): min= 746, max= 2032, per=99.24%, avg=1242.52, stdev=28.04, samples=266 00:22:38.950 iops : min=191002, max=520432, avg=318080.68, stdev=7178.98, samples=266 00:22:38.950 lat (usec) : 2=0.01%, 4=0.01%, 10=0.28%, 20=0.79%, 50=14.83% 00:22:38.950 lat (usec) : 100=62.13%, 250=20.22%, 500=0.65%, 750=0.64%, 1000=0.19% 00:22:38.950 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.23%, 50=0.02% 00:22:38.950 cpu : usr=71.87%, sys=0.02%, ctx=8065, majf=0, minf=875 00:22:38.950 IO depths : 1=12.3%, 2=24.6%, 4=50.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.950 complete : 0=0.0%, 
4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.950 issued rwts: total=0,3209640,3209645,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.950 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:38.950 00:22:38.950 Run status group 0 (all jobs): 00:22:38.950 WRITE: bw=1252MiB/s (1313MB/s), 1252MiB/s-1252MiB/s (1313MB/s-1313MB/s), io=12.2GiB (13.1GB), run=10014-10014msec 00:22:38.950 TRIM: bw=1252MiB/s (1313MB/s), 1252MiB/s-1252MiB/s (1313MB/s-1313MB/s), io=12.2GiB (13.1GB), run=10014-10014msec 00:22:41.480 ----------------------------------------------------- 00:22:41.480 Suppressions used: 00:22:41.480 count bytes template 00:22:41.480 14 129 /usr/src/fio/parse.c 00:22:41.480 2 596 libcrypto.so 00:22:41.480 ----------------------------------------------------- 00:22:41.480 00:22:41.480 ************************************ 00:22:41.480 END TEST bdev_fio_trim 00:22:41.480 ************************************ 00:22:41.480 00:22:41.480 real 0m14.695s 00:22:41.480 user 1m49.958s 00:22:41.480 sys 0m0.514s 00:22:41.480 11:15:59 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:41.480 11:15:59 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:22:41.480 11:15:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:22:41.480 11:15:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:41.480 /home/vagrant/spdk_repo/spdk 00:22:41.480 ************************************ 00:22:41.480 END TEST bdev_fio 00:22:41.480 ************************************ 00:22:41.480 11:15:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:22:41.480 11:15:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:22:41.480 00:22:41.480 real 0m30.231s 00:22:41.480 user 3m26.189s 00:22:41.480 sys 0m3.051s 00:22:41.480 11:15:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:41.480 11:15:59 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:41.480 11:15:59 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:41.480 11:15:59 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:41.480 11:15:59 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:22:41.480 11:15:59 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:41.480 11:15:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:22:41.480 ************************************ 00:22:41.480 START TEST bdev_verify 00:22:41.480 ************************************ 00:22:41.480 11:15:59 blockdev_general.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:41.480 [2024-05-15 11:15:59.950689] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:22:41.480 [2024-05-15 11:15:59.951162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51667 ] 00:22:41.738 [2024-05-15 11:16:00.117880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:41.738 [2024-05-15 11:16:00.356396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.738 [2024-05-15 11:16:00.356402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.305 [2024-05-15 11:16:00.779296] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:42.305 [2024-05-15 11:16:00.779431] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:42.305 [2024-05-15 11:16:00.787264] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:42.305 [2024-05-15 11:16:00.787329] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:42.305 [2024-05-15 11:16:00.795301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:42.305 [2024-05-15 11:16:00.795367] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:22:42.305 [2024-05-15 11:16:00.795430] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:22:42.563 [2024-05-15 11:16:00.968611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:42.563 [2024-05-15 11:16:00.968731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.563 [2024-05-15 11:16:00.968785] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002be80 00:22:42.563 [2024-05-15 11:16:00.969012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.563 [2024-05-15 11:16:00.970795] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.563 [2024-05-15 11:16:00.970861] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:22:42.822 Running I/O for 5 seconds... 
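For reference, the vbdev_passthru notices above come from the bdev JSON config that bdevperf loads through --json. The actual bdev.json is not captured in this log, so the snippet below is only a minimal sketch of how a passthru bdev such as TestPT layered on Malloc3 is typically declared; the method and parameter names are assumed from the public SPDK RPC schema, and the sizes are taken from the bdev dump earlier in the log.

cat > /tmp/bdev.min.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc3", "num_blocks": 65536, "block_size": 512 } },
        { "method": "bdev_passthru_create",
          "params": { "base_bdev_name": "Malloc3", "name": "TestPT" } }
      ] }
  ]
}
EOF
# A verify pass equivalent to the one above would then be launched along the lines of:
#   build/examples/bdevperf --json /tmp/bdev.min.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3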
00:22:48.148 00:22:48.148 Latency(us) 00:22:48.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.148 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x1000 00:22:48.148 Malloc0 : 5.05 2422.95 9.46 0.00 0.00 52767.31 351.88 189696.93 00:22:48.148 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x1000 length 0x1000 00:22:48.148 Malloc0 : 5.07 1979.82 7.73 0.00 0.00 64575.04 71.21 219247.71 00:22:48.148 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x800 00:22:48.148 Malloc1p0 : 5.05 1267.51 4.95 0.00 0.00 100744.40 1169.22 96754.97 00:22:48.148 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x800 length 0x800 00:22:48.148 Malloc1p0 : 5.07 1261.19 4.93 0.00 0.00 101213.83 1146.88 97708.22 00:22:48.148 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x800 00:22:48.148 Malloc1p1 : 5.05 1267.30 4.95 0.00 0.00 100636.06 1109.64 96754.97 00:22:48.148 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x800 length 0x800 00:22:48.148 Malloc1p1 : 5.08 1260.99 4.93 0.00 0.00 101107.27 1124.54 97708.22 00:22:48.148 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x200 00:22:48.148 Malloc2p0 : 5.05 1267.09 4.95 0.00 0.00 100531.47 1124.54 96278.34 00:22:48.148 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x200 length 0x200 00:22:48.148 Malloc2p0 : 5.08 1260.79 4.92 0.00 0.00 101002.85 1139.43 96754.97 00:22:48.148 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x200 00:22:48.148 Malloc2p1 : 5.11 1277.01 4.99 0.00 0.00 99624.76 1184.12 95801.72 00:22:48.148 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x200 length 0x200 00:22:48.148 Malloc2p1 : 5.08 1260.59 4.92 0.00 0.00 100895.63 1154.33 96278.34 00:22:48.148 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x200 00:22:48.148 Malloc2p2 : 5.11 1276.77 4.99 0.00 0.00 99534.90 1087.30 95325.09 00:22:48.148 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x200 length 0x200 00:22:48.148 Malloc2p2 : 5.08 1260.40 4.92 0.00 0.00 100795.22 1079.85 95801.72 00:22:48.148 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x200 00:22:48.148 Malloc2p3 : 5.11 1276.53 4.99 0.00 0.00 99445.82 1139.43 95801.72 00:22:48.148 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x200 length 0x200 00:22:48.148 Malloc2p3 : 5.08 1260.21 4.92 0.00 0.00 100698.51 1102.20 95801.72 00:22:48.148 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x200 00:22:48.148 Malloc2p4 : 5.11 1276.29 4.99 0.00 0.00 99340.20 1064.96 
95801.72 00:22:48.148 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x200 length 0x200 00:22:48.148 Malloc2p4 : 5.08 1260.01 4.92 0.00 0.00 100598.49 1050.07 96754.97 00:22:48.148 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x200 00:22:48.148 Malloc2p5 : 5.12 1276.06 4.98 0.00 0.00 99233.79 1087.30 96278.34 00:22:48.148 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x200 length 0x200 00:22:48.148 Malloc2p5 : 5.08 1259.82 4.92 0.00 0.00 100491.61 1042.62 97231.59 00:22:48.148 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x200 00:22:48.148 Malloc2p6 : 5.12 1275.83 4.98 0.00 0.00 99129.91 1176.67 96754.97 00:22:48.148 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x200 length 0x200 00:22:48.148 Malloc2p6 : 5.11 1276.74 4.99 0.00 0.00 99049.81 1169.22 97231.59 00:22:48.148 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x0 length 0x200 00:22:48.148 Malloc2p7 : 5.12 1275.65 4.98 0.00 0.00 99020.69 1131.99 96278.34 00:22:48.148 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.148 Verification LBA range: start 0x200 length 0x200 00:22:48.149 Malloc2p7 : 5.11 1276.51 4.99 0.00 0.00 98944.81 1094.75 96278.34 00:22:48.149 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x0 length 0x1000 00:22:48.149 TestPT : 5.12 1258.18 4.91 0.00 0.00 100069.43 6553.60 95325.09 00:22:48.149 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x1000 length 0x1000 00:22:48.149 TestPT : 5.11 1253.94 4.90 0.00 0.00 100466.37 5928.03 95801.72 00:22:48.149 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x0 length 0x2000 00:22:48.149 raid0 : 5.12 1275.41 4.98 0.00 0.00 98724.47 1273.48 88652.33 00:22:48.149 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x2000 length 0x2000 00:22:48.149 raid0 : 5.12 1276.21 4.99 0.00 0.00 98660.52 1266.04 88652.33 00:22:48.149 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x0 length 0x2000 00:22:48.149 concat0 : 5.12 1275.25 4.98 0.00 0.00 98603.68 1280.93 88175.71 00:22:48.149 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x2000 length 0x2000 00:22:48.149 concat0 : 5.12 1276.01 4.98 0.00 0.00 98536.56 1295.83 89128.96 00:22:48.149 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x0 length 0x1000 00:22:48.149 raid1 : 5.12 1275.07 4.98 0.00 0.00 98478.75 1556.48 91512.09 00:22:48.149 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x1000 length 0x1000 00:22:48.149 raid1 : 5.12 1275.79 4.98 0.00 0.00 98416.41 1444.77 92941.96 00:22:48.149 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x0 length 0x4e2 
00:22:48.149 AIO0 : 5.12 1274.68 4.98 0.00 0.00 98366.02 539.93 91512.09 00:22:48.149 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.149 Verification LBA range: start 0x4e2 length 0x4e2 00:22:48.149 AIO0 : 5.12 1255.80 4.91 0.00 0.00 99751.34 4438.57 93418.59 00:22:48.149 =================================================================================================================== 00:22:48.149 Total : 42472.42 165.91 0.00 0.00 95442.64 71.21 219247.71 00:22:50.047 00:22:50.047 real 0m8.862s 00:22:50.047 user 0m15.969s 00:22:50.047 sys 0m0.581s 00:22:50.047 11:16:08 blockdev_general.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:50.047 11:16:08 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:50.047 ************************************ 00:22:50.047 END TEST bdev_verify 00:22:50.047 ************************************ 00:22:50.305 11:16:08 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:50.305 11:16:08 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:22:50.305 11:16:08 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:50.305 11:16:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:22:50.305 ************************************ 00:22:50.305 START TEST bdev_verify_big_io 00:22:50.305 ************************************ 00:22:50.306 11:16:08 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:50.306 [2024-05-15 11:16:08.856272] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:22:50.306 [2024-05-15 11:16:08.856534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51800 ] 00:22:50.564 [2024-05-15 11:16:09.009802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:50.822 [2024-05-15 11:16:09.236669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.822 [2024-05-15 11:16:09.236673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.080 [2024-05-15 11:16:09.658939] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:51.080 [2024-05-15 11:16:09.659053] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:51.080 [2024-05-15 11:16:09.666904] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:51.080 [2024-05-15 11:16:09.666957] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:51.080 [2024-05-15 11:16:09.674912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:51.080 [2024-05-15 11:16:09.674961] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:22:51.080 [2024-05-15 11:16:09.675014] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:22:51.444 [2024-05-15 11:16:09.848308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:51.444 [2024-05-15 11:16:09.848410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.444 [2024-05-15 11:16:09.848467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002be80 00:22:51.444 [2024-05-15 11:16:09.848492] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.444 [2024-05-15 11:16:09.850671] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.444 [2024-05-15 11:16:09.850709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:22:51.703 [2024-05-15 11:16:10.188069] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:22:51.703 [2024-05-15 11:16:10.191565] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:22:51.703 [2024-05-15 11:16:10.195209] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:22:51.703 [2024-05-15 11:16:10.198819] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:22:51.703 [2024-05-15 11:16:10.201939] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.205700] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.208820] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.212469] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.215779] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.219545] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.222631] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.226399] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.230106] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.233182] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.236912] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.240427] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:22:51.704 [2024-05-15 11:16:10.321179] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:22:51.704 [2024-05-15 11:16:10.327573] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:22:51.963 Running I/O for 5 seconds... 00:22:58.524 00:22:58.524 Latency(us) 00:22:58.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.524 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x100 00:22:58.524 Malloc0 : 5.34 335.33 20.96 0.00 0.00 378234.55 366.78 1021884.97 00:22:58.524 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x100 length 0x100 00:22:58.524 Malloc0 : 5.30 458.94 28.68 0.00 0.00 276087.70 344.44 1067641.02 00:22:58.524 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x80 00:22:58.524 Malloc1p0 : 5.57 173.78 10.86 0.00 0.00 702941.80 1735.21 1304047.24 00:22:58.524 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x80 length 0x80 00:22:58.524 Malloc1p0 : 5.67 79.00 4.94 0.00 0.00 1546661.55 811.75 2181038.08 00:22:58.524 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x80 00:22:58.524 Malloc1p1 : 5.67 64.96 4.06 0.00 0.00 1880167.30 882.50 2943638.81 00:22:58.524 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x80 length 0x80 00:22:58.524 Malloc1p1 : 5.71 81.22 5.08 0.00 0.00 1485999.08 837.82 2120030.02 00:22:58.524 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x20 00:22:58.524 Malloc2p0 : 5.57 48.82 3.05 0.00 0.00 622643.23 379.81 945624.90 00:22:58.524 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x20 length 0x20 00:22:58.524 Malloc2p0 : 5.51 66.76 4.17 0.00 0.00 453615.27 379.81 697779.67 00:22:58.524 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x20 00:22:58.524 Malloc2p1 : 5.57 48.81 3.05 0.00 0.00 620218.71 394.71 941811.90 00:22:58.524 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x20 length 0x20 00:22:58.524 Malloc2p1 : 5.51 66.75 4.17 0.00 0.00 452002.06 392.84 690153.66 00:22:58.524 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x20 00:22:58.524 Malloc2p2 : 5.57 48.81 3.05 0.00 0.00 618135.40 472.90 937998.89 00:22:58.524 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x20 length 0x20 00:22:58.524 Malloc2p2 : 5.51 66.74 4.17 0.00 0.00 450142.30 450.56 678714.65 00:22:58.524 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 
32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x20 00:22:58.524 Malloc2p3 : 5.57 48.80 3.05 0.00 0.00 615975.11 491.52 934185.89 00:22:58.524 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x20 length 0x20 00:22:58.524 Malloc2p3 : 5.51 66.74 4.17 0.00 0.00 448520.51 480.35 663462.63 00:22:58.524 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x20 00:22:58.524 Malloc2p4 : 5.57 48.80 3.05 0.00 0.00 613436.15 495.24 930372.89 00:22:58.524 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x20 length 0x20 00:22:58.524 Malloc2p4 : 5.51 66.73 4.17 0.00 0.00 446735.13 510.14 655836.63 00:22:58.524 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x20 00:22:58.524 Malloc2p5 : 5.57 48.79 3.05 0.00 0.00 610985.78 415.19 922746.88 00:22:58.524 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x20 length 0x20 00:22:58.524 Malloc2p5 : 5.52 66.72 4.17 0.00 0.00 445004.33 400.29 644397.61 00:22:58.524 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x20 00:22:58.524 Malloc2p6 : 5.58 48.79 3.05 0.00 0.00 608854.77 528.76 918933.88 00:22:58.524 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x20 length 0x20 00:22:58.524 Malloc2p6 : 5.55 69.24 4.33 0.00 0.00 429274.10 510.14 636771.61 00:22:58.524 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x20 00:22:58.524 Malloc2p7 : 5.58 48.78 3.05 0.00 0.00 606613.01 426.36 915120.87 00:22:58.524 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x20 length 0x20 00:22:58.524 Malloc2p7 : 5.55 69.23 4.33 0.00 0.00 427742.86 387.26 625332.60 00:22:58.524 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x100 00:22:58.524 TestPT : 5.75 61.94 3.87 0.00 0.00 1873096.53 46470.98 2653850.53 00:22:58.524 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x100 length 0x100 00:22:58.524 TestPT : 5.73 78.17 4.89 0.00 0.00 1479849.69 46470.98 1883623.80 00:22:58.524 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x200 00:22:58.524 raid0 : 5.70 67.33 4.21 0.00 0.00 1704822.55 863.88 2760614.63 00:22:58.524 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x200 length 0x200 00:22:58.524 raid0 : 5.67 87.41 5.46 0.00 0.00 1308939.40 875.05 1883623.80 00:22:58.524 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x200 00:22:58.524 concat0 : 5.70 72.58 4.54 0.00 0.00 1561280.27 826.65 2684354.56 00:22:58.524 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x200 length 0x200 00:22:58.524 concat0 : 5.73 92.12 5.76 0.00 0.00 1227622.99 811.75 1807363.72 
00:22:58.524 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x100 00:22:58.524 raid1 : 5.73 80.96 5.06 0.00 0.00 1387005.58 1191.56 2623346.50 00:22:58.524 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x100 length 0x100 00:22:58.524 raid1 : 5.73 103.24 6.45 0.00 0.00 1087692.78 1124.54 1738729.66 00:22:58.524 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x0 length 0x4e 00:22:58.524 AIO0 : 5.78 93.06 5.82 0.00 0.00 727733.25 1027.72 1570957.50 00:22:58.524 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:22:58.524 Verification LBA range: start 0x4e length 0x4e 00:22:58.524 AIO0 : 5.78 123.29 7.71 0.00 0.00 548670.96 532.48 1006632.96 00:22:58.524 =================================================================================================================== 00:22:58.524 Total : 2982.63 186.41 0.00 0.00 770734.94 344.44 2943638.81 00:23:00.425 ************************************ 00:23:00.425 END TEST bdev_verify_big_io 00:23:00.425 ************************************ 00:23:00.425 00:23:00.425 real 0m9.942s 00:23:00.425 user 0m18.237s 00:23:00.425 sys 0m0.496s 00:23:00.425 11:16:18 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:00.425 11:16:18 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:00.425 11:16:18 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:00.425 11:16:18 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:23:00.425 11:16:18 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:00.425 11:16:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:23:00.425 ************************************ 00:23:00.425 START TEST bdev_write_zeroes 00:23:00.425 ************************************ 00:23:00.425 11:16:18 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:00.425 [2024-05-15 11:16:18.844625] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:23:00.425 [2024-05-15 11:16:18.845000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51946 ] 00:23:00.425 [2024-05-15 11:16:19.006163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.682 [2024-05-15 11:16:19.221077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.249 [2024-05-15 11:16:19.646964] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:23:01.249 [2024-05-15 11:16:19.647068] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:23:01.249 [2024-05-15 11:16:19.654927] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:23:01.249 [2024-05-15 11:16:19.654974] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:23:01.249 [2024-05-15 11:16:19.662968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:23:01.249 [2024-05-15 11:16:19.663017] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:23:01.249 [2024-05-15 11:16:19.663061] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:23:01.249 [2024-05-15 11:16:19.841569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:23:01.249 [2024-05-15 11:16:19.841686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.249 [2024-05-15 11:16:19.841749] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002bb80 00:23:01.249 [2024-05-15 11:16:19.841778] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.249 [2024-05-15 11:16:19.843825] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.249 [2024-05-15 11:16:19.843870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:23:01.816 Running I/O for 1 seconds... 
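The bdevs dumped earlier in this log all report "write_zeroes": true in their supported_io_types map, which is what makes this one-second write_zeroes pass meaningful for all 16 targets. A standalone check against a running target, reusing the jq pattern the trim stage used above for unmap (default RPC socket assumed), would be:

# List bdevs that advertise write_zeroes support; mirrors the unmap filter
# used when the trim job file was generated earlier in this log.
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
  | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'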
00:23:02.751 00:23:02.751 Latency(us) 00:23:02.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.751 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc0 : 1.02 11853.05 46.30 0.00 0.00 10795.13 303.48 18350.08 00:23:02.751 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc1p0 : 1.02 11849.17 46.29 0.00 0.00 10785.96 426.36 17754.30 00:23:02.751 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc1p1 : 1.02 11845.20 46.27 0.00 0.00 10781.03 390.98 17277.67 00:23:02.751 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc2p0 : 1.02 11841.67 46.26 0.00 0.00 10775.48 390.98 16920.20 00:23:02.751 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc2p1 : 1.02 11838.06 46.24 0.00 0.00 10767.23 381.67 16681.89 00:23:02.751 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc2p2 : 1.02 11834.52 46.23 0.00 0.00 10761.72 415.19 16324.42 00:23:02.751 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc2p3 : 1.02 11830.94 46.21 0.00 0.00 10756.10 396.57 15966.95 00:23:02.751 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc2p4 : 1.02 11827.47 46.20 0.00 0.00 10748.26 394.71 15609.48 00:23:02.751 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc2p5 : 1.02 11823.95 46.19 0.00 0.00 10743.37 404.01 15132.86 00:23:02.751 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc2p6 : 1.02 11876.96 46.39 0.00 0.00 10686.17 415.19 14715.81 00:23:02.751 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 Malloc2p7 : 1.02 11873.11 46.38 0.00 0.00 10679.70 404.01 14298.76 00:23:02.751 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 TestPT : 1.02 11868.50 46.36 0.00 0.00 10672.42 390.98 13941.29 00:23:02.751 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 raid0 : 1.02 11863.92 46.34 0.00 0.00 10665.31 692.60 13285.93 00:23:02.751 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 concat0 : 1.03 11859.69 46.33 0.00 0.00 10653.67 703.77 12749.73 00:23:02.751 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 raid1 : 1.03 11853.69 46.30 0.00 0.00 10639.53 1094.75 12690.15 00:23:02.751 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:02.751 AIO0 : 1.03 11842.24 46.26 0.00 0.00 10625.40 934.63 12749.73 00:23:02.751 =================================================================================================================== 00:23:02.751 Total : 189582.13 740.56 0.00 0.00 10720.75 303.48 18350.08 00:23:05.277 00:23:05.277 real 0m4.660s 00:23:05.277 user 0m4.017s 00:23:05.277 sys 0m0.429s 00:23:05.277 ************************************ 00:23:05.277 END TEST bdev_write_zeroes 00:23:05.277 ************************************ 00:23:05.277 11:16:23 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:05.277 11:16:23 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:05.277 11:16:23 
blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:05.277 11:16:23 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:23:05.277 11:16:23 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:05.277 11:16:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:23:05.277 ************************************ 00:23:05.277 START TEST bdev_json_nonenclosed 00:23:05.277 ************************************ 00:23:05.277 11:16:23 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:05.277 [2024-05-15 11:16:23.561899] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:23:05.277 [2024-05-15 11:16:23.562114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52020 ] 00:23:05.277 [2024-05-15 11:16:23.734510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.536 [2024-05-15 11:16:23.983875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.536 [2024-05-15 11:16:23.984001] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:05.536 [2024-05-15 11:16:23.984042] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:05.536 [2024-05-15 11:16:23.984065] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:05.794 00:23:05.794 real 0m0.942s 00:23:05.794 user 0m0.616s 00:23:05.794 sys 0m0.129s 00:23:05.794 11:16:24 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:05.794 ************************************ 00:23:05.794 END TEST bdev_json_nonenclosed 00:23:05.794 ************************************ 00:23:05.794 11:16:24 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:05.794 11:16:24 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:05.794 11:16:24 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:23:05.794 11:16:24 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:05.794 11:16:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:23:05.794 ************************************ 00:23:05.794 START TEST bdev_json_nonarray 00:23:05.794 ************************************ 00:23:05.794 11:16:24 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:06.052 [2024-05-15 11:16:24.553268] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:23:06.052 [2024-05-15 11:16:24.553483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52058 ] 00:23:06.310 [2024-05-15 11:16:24.706742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.310 [2024-05-15 11:16:24.932587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.310 [2024-05-15 11:16:24.932734] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:23:06.310 [2024-05-15 11:16:24.932781] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:06.310 [2024-05-15 11:16:24.932803] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:06.877 ************************************ 00:23:06.877 END TEST bdev_json_nonarray 00:23:06.877 ************************************ 00:23:06.877 00:23:06.877 real 0m0.916s 00:23:06.877 user 0m0.597s 00:23:06.877 sys 0m0.117s 00:23:06.877 11:16:25 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:06.877 11:16:25 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:06.877 11:16:25 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:23:06.877 11:16:25 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:23:06.877 11:16:25 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:06.877 11:16:25 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:06.877 11:16:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:23:06.877 ************************************ 00:23:06.877 START TEST bdev_qos 00:23:06.877 ************************************ 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1121 -- # qos_test_suite '' 00:23:06.877 Process qos testing pid: 52096 00:23:06.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=52096 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 52096' 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 52096 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- common/autotest_common.sh@827 -- # '[' -z 52096 ']' 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
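bdevperf is launched here with -z and no --json config, so it comes up with no bdevs configured and the test then builds them over JSON-RPC on /var/tmp/spdk.sock; the rpc_cmd invocations that follow are thin wrappers around scripts/rpc.py. Equivalent direct calls, with the rpc.py path assumed relative to the SPDK repo root and sizes matching the ones used below, would be:

# Create the two bdevs the QoS suite exercises: a 128 MiB malloc bdev and a
# 128 MiB null bdev, both with 512-byte blocks, then dump the malloc bdev.
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc_0 128 512
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create Null_1 128 512
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc_0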
00:23:06.877 11:16:25 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:06.877 11:16:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:07.136 [2024-05-15 11:16:25.537995] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:23:07.136 [2024-05-15 11:16:25.538226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52096 ] 00:23:07.136 [2024-05-15 11:16:25.701192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.396 [2024-05-15 11:16:25.952276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # return 0 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:07.965 Malloc_0 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_0 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:23:07.965 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:07.966 [ 00:23:07.966 { 00:23:07.966 "name": "Malloc_0", 00:23:07.966 "aliases": [ 00:23:07.966 "b393726e-0d3f-4d5d-82df-fb385f58c184" 00:23:07.966 ], 00:23:07.966 "product_name": "Malloc disk", 00:23:07.966 "block_size": 512, 00:23:07.966 "num_blocks": 262144, 00:23:07.966 "uuid": "b393726e-0d3f-4d5d-82df-fb385f58c184", 00:23:07.966 "assigned_rate_limits": { 00:23:07.966 "rw_ios_per_sec": 0, 00:23:07.966 "rw_mbytes_per_sec": 0, 00:23:07.966 "r_mbytes_per_sec": 0, 00:23:07.966 "w_mbytes_per_sec": 0 00:23:07.966 }, 00:23:07.966 "claimed": false, 00:23:07.966 "zoned": false, 00:23:07.966 "supported_io_types": { 00:23:07.966 "read": true, 00:23:07.966 "write": true, 00:23:07.966 "unmap": true, 00:23:07.966 "write_zeroes": true, 00:23:07.966 "flush": true, 
00:23:07.966 "reset": true, 00:23:07.966 "compare": false, 00:23:07.966 "compare_and_write": false, 00:23:07.966 "abort": true, 00:23:07.966 "nvme_admin": false, 00:23:07.966 "nvme_io": false 00:23:07.966 }, 00:23:07.966 "memory_domains": [ 00:23:07.966 { 00:23:07.966 "dma_device_id": "system", 00:23:07.966 "dma_device_type": 1 00:23:07.966 }, 00:23:07.966 { 00:23:07.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.966 "dma_device_type": 2 00:23:07.966 } 00:23:07.966 ], 00:23:07.966 "driver_specific": {} 00:23:07.966 } 00:23:07.966 ] 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:07.966 Null_1 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Null_1 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:07.966 [ 00:23:07.966 { 00:23:07.966 "name": "Null_1", 00:23:07.966 "aliases": [ 00:23:07.966 "93eb4d3d-65e0-45a7-ad3b-fc5d08bc3769" 00:23:07.966 ], 00:23:07.966 "product_name": "Null disk", 00:23:07.966 "block_size": 512, 00:23:07.966 "num_blocks": 262144, 00:23:07.966 "uuid": "93eb4d3d-65e0-45a7-ad3b-fc5d08bc3769", 00:23:07.966 "assigned_rate_limits": { 00:23:07.966 "rw_ios_per_sec": 0, 00:23:07.966 "rw_mbytes_per_sec": 0, 00:23:07.966 "r_mbytes_per_sec": 0, 00:23:07.966 "w_mbytes_per_sec": 0 00:23:07.966 }, 00:23:07.966 "claimed": false, 00:23:07.966 "zoned": false, 00:23:07.966 "supported_io_types": { 00:23:07.966 "read": true, 00:23:07.966 "write": true, 00:23:07.966 "unmap": false, 00:23:07.966 "write_zeroes": true, 00:23:07.966 "flush": false, 00:23:07.966 "reset": true, 00:23:07.966 "compare": false, 00:23:07.966 "compare_and_write": false, 00:23:07.966 "abort": true, 00:23:07.966 "nvme_admin": false, 00:23:07.966 "nvme_io": false 00:23:07.966 }, 00:23:07.966 "driver_specific": {} 00:23:07.966 } 00:23:07.966 ] 
00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:23:07.966 11:16:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:23:08.224 Running I/O for 60 seconds... 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 160681.94 642727.77 0.00 0.00 651264.00 0.00 0.00 ' 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=160681.94 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 160681 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=160681 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=40000 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 40000 -gt 1000 ']' 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 40000 Malloc_0 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 40000 IOPS Malloc_0 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:13.492 11:16:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:13.492 ************************************ 00:23:13.492 START TEST bdev_qos_iops 00:23:13.492 ************************************ 00:23:13.492 11:16:31 
blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1121 -- # run_qos_test 40000 IOPS Malloc_0 00:23:13.492 11:16:31 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=40000 00:23:13.492 11:16:31 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:23:13.492 11:16:31 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:23:13.492 11:16:31 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:23:13.492 11:16:31 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:23:13.492 11:16:31 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:13.492 11:16:31 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:13.492 11:16:31 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:23:13.492 11:16:31 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 39962.75 159851.01 0.00 0.00 161440.00 0.00 0.00 ' 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=39962.75 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 39962 00:23:18.765 ************************************ 00:23:18.765 END TEST bdev_qos_iops 00:23:18.765 ************************************ 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=39962 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=36000 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=44000 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 39962 -lt 36000 ']' 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 39962 -gt 44000 ']' 00:23:18.765 00:23:18.765 real 0m5.185s 00:23:18.765 user 0m0.114s 00:23:18.765 sys 0m0.021s 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:18.765 11:16:36 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:23:18.765 11:16:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:23:18.765 11:16:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:23:18.765 11:16:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:23:18.765 11:16:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:18.765 11:16:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:23:18.765 11:16:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:18.765 11:16:37 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@378 -- # tail -1 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 44536.01 178144.04 0.00 0.00 180224.00 0.00 0.00 ' 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=180224.00 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 180224 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=180224 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=17 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 17 -lt 2 ']' 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 17 Null_1 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 17 BANDWIDTH Null_1 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:24.061 11:16:42 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:24.061 ************************************ 00:23:24.061 START TEST bdev_qos_bw 00:23:24.061 ************************************ 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1121 -- # run_qos_test 17 BANDWIDTH Null_1 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=17 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:23:24.061 11:16:42 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 4348.52 17394.09 0.00 0.00 17632.00 0.00 0.00 ' 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- 
bdev/blockdev.sh@382 -- # awk '{print $6}' 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=17632.00 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 17632 00:23:29.327 ************************************ 00:23:29.327 END TEST bdev_qos_bw 00:23:29.327 ************************************ 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=17632 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=17408 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=15667 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=19148 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 17632 -lt 15667 ']' 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 17632 -gt 19148 ']' 00:23:29.327 00:23:29.327 real 0m5.201s 00:23:29.327 user 0m0.120s 00:23:29.327 sys 0m0.024s 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:23:29.327 11:16:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:23:29.327 11:16:47 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.327 11:16:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:29.327 11:16:47 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.327 11:16:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:23:29.327 11:16:47 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:29.327 11:16:47 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:29.327 11:16:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:29.327 ************************************ 00:23:29.327 START TEST bdev_qos_ro_bw 00:23:29.327 ************************************ 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1121 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 
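The run_qos_test invocations above (40000 IOPS on Malloc_0, 17 MB/s on Null_1, and, in the stretch that follows, 2 MB/s read-only on Malloc_0) all apply the same acceptance rule: with the limit active, the throughput reported by iostat.py must fall within roughly ±10% of the configured cap, which is where the 36000/44000 and 15667/19148 bounds in the log come from. A condensed sketch of that check, with the iostat parsing reduced to hard-coded numbers taken from the IOPS case above:

# Sketch of the run_qos_test acceptance check; values are from the log above.
qos_limit=40000                       # configured cap (IOPS here, KiB/s in the bandwidth cases)
measured=39962                        # value reported by scripts/iostat.py for the throttled bdev

lower_limit=$((qos_limit * 9 / 10))   # 36000 for the 40000 IOPS case
upper_limit=$((qos_limit * 11 / 10))  # 44000 for the 40000 IOPS case

if [ "$measured" -lt "$lower_limit" ] || [ "$measured" -gt "$upper_limit" ]; then
    echo "QoS limit not enforced: $measured outside [$lower_limit, $upper_limit]"
    exit 1
fi
echo "QoS limit honored: $measured is within 10% of $qos_limit"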
00:23:29.327 11:16:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.95 2047.80 0.00 0.00 2068.00 0.00 0.00 ' 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2068.00 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2068 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2068 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:23:34.662 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -lt 1843 ']' 00:23:34.663 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -gt 2252 ']' 00:23:34.663 00:23:34.663 real 0m5.185s 00:23:34.663 user 0m0.116s 00:23:34.663 sys 0m0.035s 00:23:34.663 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:34.663 11:16:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 ************************************ 00:23:34.663 END TEST bdev_qos_ro_bw 00:23:34.663 ************************************ 00:23:34.663 11:16:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:23:34.663 11:16:52 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 11:16:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:34.921 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.921 11:16:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:34.922 00:23:34.922 Latency(us) 00:23:34.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.922 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:23:34.922 Malloc_0 : 26.63 54252.90 211.93 0.00 0.00 4675.70 1124.54 503316.48 00:23:34.922 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:23:34.922 Null_1 : 26.81 49251.76 192.39 0.00 0.00 5189.89 351.88 170631.91 00:23:34.922 =================================================================================================================== 00:23:34.922 Total : 103504.66 404.32 0.00 0.00 4921.23 351.88 503316.48 00:23:34.922 0 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
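Three different knobs of bdev_set_qos_limit are exercised across these sub-tests: --rw_ios_per_sec (combined IOPS) on Malloc_0, --rw_mbytes_per_sec (combined bandwidth) on Null_1, and --r_mbytes_per_sec (read bandwidth only) on Malloc_0. They correspond one-to-one to the assigned_rate_limits fields in the bdev descriptors shown earlier, where 0 means unlimited. A minimal sketch of applying and clearing such limits over RPC (the rpc.py path is an assumption):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path assumed

# Cap combined read+write IOPS, as the bdev_qos_iops test does.
$RPC bdev_set_qos_limit --rw_ios_per_sec 40000 Malloc_0

# Cap combined bandwidth in megabytes per second, as bdev_qos_bw does on Null_1.
$RPC bdev_set_qos_limit --rw_mbytes_per_sec 17 Null_1

# Cap read bandwidth only, as bdev_qos_ro_bw does.
$RPC bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0

# Setting a limit back to 0 removes it.
$RPC bdev_set_qos_limit --rw_ios_per_sec 0 --r_mbytes_per_sec 0 Malloc_0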
00:23:34.922 11:16:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 52096 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@946 -- # '[' -z 52096 ']' 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # kill -0 52096 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # uname 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 52096 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52096' 00:23:34.922 killing process with pid 52096 00:23:34.922 Received shutdown signal, test time was about 26.849896 seconds 00:23:34.922 00:23:34.922 Latency(us) 00:23:34.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.922 =================================================================================================================== 00:23:34.922 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@965 -- # kill 52096 00:23:34.922 11:16:53 blockdev_general.bdev_qos -- common/autotest_common.sh@970 -- # wait 52096 00:23:36.820 ************************************ 00:23:36.820 END TEST bdev_qos 00:23:36.820 ************************************ 00:23:36.820 11:16:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:23:36.820 00:23:36.820 real 0m29.591s 00:23:36.820 user 0m30.102s 00:23:36.820 sys 0m0.744s 00:23:36.820 11:16:54 blockdev_general.bdev_qos -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:36.820 11:16:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:23:36.820 11:16:55 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:23:36.820 11:16:55 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:36.820 11:16:55 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:36.820 11:16:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:23:36.820 ************************************ 00:23:36.820 START TEST bdev_qd_sampling 00:23:36.820 ************************************ 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1121 -- # qd_sampling_test_suite '' 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:23:36.820 Process bdev QD sampling period testing pid: 52577 00:23:36.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
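Each suite in this log registers the same safety net before doing any real work: a trap that tears down the bdevperf process on SIGINT/SIGTERM/EXIT, plus a killprocess helper that checks the pid is still alive with kill -0, signals it, and waits for it, which is the sequence visible above as pid 52096 is reaped. A reduced sketch of that pattern follows; the cleanup body is an assumption, since the real helpers live in the suite's shell libraries.

# Sketch of the trap/killprocess pattern used by bdev_qos and bdev_qd_sampling.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
    kill "$pid"                              # ask bdevperf to shut down
    wait "$pid" 2>/dev/null || true          # reap it so the job table stays clean
}

cleanup() {
    # Placeholder: the real suites delete the bdevs they created (Malloc_0, Null_1, ...).
    :
}

QOS_PID=52096   # illustrative; in the suite this is the freshly started bdevperf pid
trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT

# ... test body ...

# On success the trap is cleared before returning, as in the log ("trap - SIGINT SIGTERM EXIT").
trap - SIGINT SIGTERM EXIT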
00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=52577 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 52577' 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 52577 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@827 -- # '[' -z 52577 ']' 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:36.820 11:16:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:23:36.820 [2024-05-15 11:16:55.168044] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:23:36.820 [2024-05-15 11:16:55.168247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52577 ] 00:23:36.820 [2024-05-15 11:16:55.335600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:37.079 [2024-05-15 11:16:55.587381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.079 [2024-05-15 11:16:55.587386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.644 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:37.644 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # return 0 00:23:37.644 11:16:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:23:37.644 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.644 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:23:37.645 Malloc_QD 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_QD 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local i 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling 
-- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:23:37.645 [ 00:23:37.645 { 00:23:37.645 "name": "Malloc_QD", 00:23:37.645 "aliases": [ 00:23:37.645 "8c71c027-e33e-48a4-8af3-7d05b3b16038" 00:23:37.645 ], 00:23:37.645 "product_name": "Malloc disk", 00:23:37.645 "block_size": 512, 00:23:37.645 "num_blocks": 262144, 00:23:37.645 "uuid": "8c71c027-e33e-48a4-8af3-7d05b3b16038", 00:23:37.645 "assigned_rate_limits": { 00:23:37.645 "rw_ios_per_sec": 0, 00:23:37.645 "rw_mbytes_per_sec": 0, 00:23:37.645 "r_mbytes_per_sec": 0, 00:23:37.645 "w_mbytes_per_sec": 0 00:23:37.645 }, 00:23:37.645 "claimed": false, 00:23:37.645 "zoned": false, 00:23:37.645 "supported_io_types": { 00:23:37.645 "read": true, 00:23:37.645 "write": true, 00:23:37.645 "unmap": true, 00:23:37.645 "write_zeroes": true, 00:23:37.645 "flush": true, 00:23:37.645 "reset": true, 00:23:37.645 "compare": false, 00:23:37.645 "compare_and_write": false, 00:23:37.645 "abort": true, 00:23:37.645 "nvme_admin": false, 00:23:37.645 "nvme_io": false 00:23:37.645 }, 00:23:37.645 "memory_domains": [ 00:23:37.645 { 00:23:37.645 "dma_device_id": "system", 00:23:37.645 "dma_device_type": 1 00:23:37.645 }, 00:23:37.645 { 00:23:37.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.645 "dma_device_type": 2 00:23:37.645 } 00:23:37.645 ], 00:23:37.645 "driver_specific": {} 00:23:37.645 } 00:23:37.645 ] 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # return 0 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:23:37.645 11:16:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:37.935 Running I/O for 5 seconds... 
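The run that just started is what the QD sampling test inspects in the next stretch of the log: bdev_set_qd_sampling_period Malloc_QD 10 tells the bdev layer to sample the device's queue depth every 10 ms, and bdev_get_iostat then reports queue_depth_polling_period, the most recent queue_depth, io_time and weighted_io_time, from which the test checks that the period it set is the one in effect. A sketch of the same round trip with rpc.py and jq (paths assumed):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path assumed

# Enable queue-depth sampling on Malloc_QD with a 10 ms period.
$RPC bdev_set_qd_sampling_period Malloc_QD 10

# ... run I/O for a few seconds (the suite uses bdevperf.py perform_tests) ...

# Read back the per-bdev stats and confirm the period took effect.
period=$($RPC bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period')
[ "$period" -eq 10 ] || { echo "sampling period not applied"; exit 1; }

# weighted_io_time / io_time gives a rough average queue depth
# (30720 / 60 ≈ 512 in the iostats blob below).
$RPC bdev_get_iostat -b Malloc_QD | jq '.bdevs[0] | {queue_depth, io_time, weighted_io_time}'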
00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:23:39.847 "tick_rate": 2200000000, 00:23:39.847 "ticks": 1426810322546, 00:23:39.847 "bdevs": [ 00:23:39.847 { 00:23:39.847 "name": "Malloc_QD", 00:23:39.847 "bytes_read": 1953534464, 00:23:39.847 "num_read_ops": 476931, 00:23:39.847 "bytes_written": 0, 00:23:39.847 "num_write_ops": 0, 00:23:39.847 "bytes_unmapped": 0, 00:23:39.847 "num_unmap_ops": 0, 00:23:39.847 "bytes_copied": 0, 00:23:39.847 "num_copy_ops": 0, 00:23:39.847 "read_latency_ticks": 2138984652944, 00:23:39.847 "max_read_latency_ticks": 8271985, 00:23:39.847 "min_read_latency_ticks": 258095, 00:23:39.847 "write_latency_ticks": 0, 00:23:39.847 "max_write_latency_ticks": 0, 00:23:39.847 "min_write_latency_ticks": 0, 00:23:39.847 "unmap_latency_ticks": 0, 00:23:39.847 "max_unmap_latency_ticks": 0, 00:23:39.847 "min_unmap_latency_ticks": 0, 00:23:39.847 "copy_latency_ticks": 0, 00:23:39.847 "max_copy_latency_ticks": 0, 00:23:39.847 "min_copy_latency_ticks": 0, 00:23:39.847 "io_error": {}, 00:23:39.847 "queue_depth_polling_period": 10, 00:23:39.847 "queue_depth": 512, 00:23:39.847 "io_time": 60, 00:23:39.847 "weighted_io_time": 30720 00:23:39.847 } 00:23:39.847 ] 00:23:39.847 }' 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.847 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:23:39.848 00:23:39.848 Latency(us) 00:23:39.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.848 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 
256, IO size: 4096) 00:23:39.848 Malloc_QD : 1.98 124784.57 487.44 0.00 0.00 2047.90 487.80 3768.32 00:23:39.848 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:23:39.848 Malloc_QD : 1.98 125921.77 491.88 0.00 0.00 2029.33 312.79 2487.39 00:23:39.848 =================================================================================================================== 00:23:39.848 Total : 250706.34 979.32 0.00 0.00 2038.57 312.79 3768.32 00:23:39.848 0 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 52577 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@946 -- # '[' -z 52577 ']' 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # kill -0 52577 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # uname 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 52577 00:23:39.848 killing process with pid 52577 00:23:39.848 Received shutdown signal, test time was about 2.120682 seconds 00:23:39.848 00:23:39.848 Latency(us) 00:23:39.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.848 =================================================================================================================== 00:23:39.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52577' 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@965 -- # kill 52577 00:23:39.848 11:16:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@970 -- # wait 52577 00:23:41.221 ************************************ 00:23:41.221 END TEST bdev_qd_sampling 00:23:41.221 ************************************ 00:23:41.221 11:16:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:23:41.221 00:23:41.221 real 0m4.831s 00:23:41.221 user 0m8.810s 00:23:41.221 sys 0m0.372s 00:23:41.221 11:16:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:41.221 11:16:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:23:41.480 11:16:59 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:23:41.480 11:16:59 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:41.480 11:16:59 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:41.480 11:16:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:23:41.480 ************************************ 00:23:41.480 START TEST bdev_error 00:23:41.480 ************************************ 00:23:41.480 11:16:59 blockdev_general.bdev_error -- common/autotest_common.sh@1121 -- # error_test_suite '' 00:23:41.480 11:16:59 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:23:41.480 11:16:59 
blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:23:41.480 11:16:59 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:23:41.480 Process error testing pid: 52675 00:23:41.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.480 11:16:59 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=52675 00:23:41.480 11:16:59 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 52675' 00:23:41.480 11:16:59 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 52675 00:23:41.480 11:16:59 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:23:41.480 11:16:59 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 52675 ']' 00:23:41.480 11:16:59 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.480 11:16:59 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:41.480 11:16:59 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.480 11:16:59 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:41.480 11:16:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:41.480 [2024-05-15 11:17:00.035373] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:23:41.480 [2024-05-15 11:17:00.035536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52675 ] 00:23:41.802 [2024-05-15 11:17:00.189302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.060 [2024-05-15 11:17:00.443626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.319 11:17:00 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:42.319 11:17:00 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:23:42.319 11:17:00 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:23:42.319 11:17:00 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.319 11:17:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.578 Dev_1 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.578 11:17:01 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:42.578 11:17:01 
blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.578 [ 00:23:42.578 { 00:23:42.578 "name": "Dev_1", 00:23:42.578 "aliases": [ 00:23:42.578 "98488e7e-a78b-4dac-afb9-74c8647160d6" 00:23:42.578 ], 00:23:42.578 "product_name": "Malloc disk", 00:23:42.578 "block_size": 512, 00:23:42.578 "num_blocks": 262144, 00:23:42.578 "uuid": "98488e7e-a78b-4dac-afb9-74c8647160d6", 00:23:42.578 "assigned_rate_limits": { 00:23:42.578 "rw_ios_per_sec": 0, 00:23:42.578 "rw_mbytes_per_sec": 0, 00:23:42.578 "r_mbytes_per_sec": 0, 00:23:42.578 "w_mbytes_per_sec": 0 00:23:42.578 }, 00:23:42.578 "claimed": false, 00:23:42.578 "zoned": false, 00:23:42.578 "supported_io_types": { 00:23:42.578 "read": true, 00:23:42.578 "write": true, 00:23:42.578 "unmap": true, 00:23:42.578 "write_zeroes": true, 00:23:42.578 "flush": true, 00:23:42.578 "reset": true, 00:23:42.578 "compare": false, 00:23:42.578 "compare_and_write": false, 00:23:42.578 "abort": true, 00:23:42.578 "nvme_admin": false, 00:23:42.578 "nvme_io": false 00:23:42.578 }, 00:23:42.578 "memory_domains": [ 00:23:42.578 { 00:23:42.578 "dma_device_id": "system", 00:23:42.578 "dma_device_type": 1 00:23:42.578 }, 00:23:42.578 { 00:23:42.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.578 "dma_device_type": 2 00:23:42.578 } 00:23:42.578 ], 00:23:42.578 "driver_specific": {} 00:23:42.578 } 00:23:42.578 ] 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:23:42.578 11:17:01 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.578 true 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.578 11:17:01 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.578 Dev_2 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.578 11:17:01 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:42.578 11:17:01 blockdev_general.bdev_error -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.578 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.578 [ 00:23:42.578 { 00:23:42.578 "name": "Dev_2", 00:23:42.578 "aliases": [ 00:23:42.578 "29df8475-e308-47b1-b4f4-3f0316b34e36" 00:23:42.578 ], 00:23:42.578 "product_name": "Malloc disk", 00:23:42.578 "block_size": 512, 00:23:42.578 "num_blocks": 262144, 00:23:42.578 "uuid": "29df8475-e308-47b1-b4f4-3f0316b34e36", 00:23:42.578 "assigned_rate_limits": { 00:23:42.578 "rw_ios_per_sec": 0, 00:23:42.578 "rw_mbytes_per_sec": 0, 00:23:42.578 "r_mbytes_per_sec": 0, 00:23:42.578 "w_mbytes_per_sec": 0 00:23:42.578 }, 00:23:42.578 "claimed": false, 00:23:42.578 "zoned": false, 00:23:42.578 "supported_io_types": { 00:23:42.578 "read": true, 00:23:42.578 "write": true, 00:23:42.578 "unmap": true, 00:23:42.578 "write_zeroes": true, 00:23:42.578 "flush": true, 00:23:42.579 "reset": true, 00:23:42.579 "compare": false, 00:23:42.579 "compare_and_write": false, 00:23:42.579 "abort": true, 00:23:42.579 "nvme_admin": false, 00:23:42.579 "nvme_io": false 00:23:42.579 }, 00:23:42.579 "memory_domains": [ 00:23:42.579 { 00:23:42.579 "dma_device_id": "system", 00:23:42.579 "dma_device_type": 1 00:23:42.579 }, 00:23:42.579 { 00:23:42.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.579 "dma_device_type": 2 00:23:42.579 } 00:23:42.579 ], 00:23:42.579 "driver_specific": {} 00:23:42.579 } 00:23:42.579 ] 00:23:42.579 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.579 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:23:42.579 11:17:01 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:23:42.579 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.579 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.579 11:17:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.579 11:17:01 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:23:42.579 11:17:01 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:23:42.837 Running I/O for 5 seconds... 00:23:43.771 Process is existed as continue on error is set. Pid: 52675 00:23:43.771 11:17:02 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 52675 00:23:43.771 11:17:02 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 52675' 00:23:43.771 11:17:02 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:23:43.771 11:17:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.771 11:17:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.771 11:17:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.771 11:17:02 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:23:43.771 11:17:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.771 11:17:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.771 Timeout while waiting for response: 00:23:43.771 00:23:43.771 00:23:44.029 11:17:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.029 11:17:02 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:23:48.295 00:23:48.295 Latency(us) 00:23:48.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.295 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:23:48.295 EE_Dev_1 : 0.91 94250.87 368.17 5.52 0.00 168.50 95.42 595.78 00:23:48.295 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:23:48.295 Dev_2 : 5.00 187672.32 733.10 0.00 0.00 84.01 26.30 322198.81 00:23:48.295 =================================================================================================================== 00:23:48.295 Total : 281923.20 1101.26 5.52 0.00 91.06 26.30 322198.81 00:23:49.231 11:17:07 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 52675 00:23:49.231 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@946 -- # '[' -z 52675 ']' 00:23:49.232 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # kill -0 52675 00:23:49.232 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # uname 00:23:49.232 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:49.232 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 52675 00:23:49.232 killing process with pid 52675 00:23:49.232 Received shutdown signal, test time was about 5.000000 seconds 00:23:49.232 00:23:49.232 Latency(us) 00:23:49.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.232 =================================================================================================================== 00:23:49.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.232 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:49.232 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:49.232 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52675' 00:23:49.232 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@965 -- # kill 52675 00:23:49.232 11:17:07 blockdev_general.bdev_error -- common/autotest_common.sh@970 -- # wait 52675 00:23:50.607 Process error testing pid: 52799 00:23:50.607 11:17:09 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=52799 00:23:50.607 11:17:09 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 52799' 00:23:50.607 11:17:09 blockdev_general.bdev_error 
-- bdev/blockdev.sh@505 -- # waitforlisten 52799 00:23:50.607 11:17:09 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:23:50.607 11:17:09 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 52799 ']' 00:23:50.607 11:17:09 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.607 11:17:09 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:50.607 11:17:09 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.607 11:17:09 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:50.607 11:17:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:50.607 [2024-05-15 11:17:09.235531] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:23:50.607 [2024-05-15 11:17:09.235731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52799 ] 00:23:50.866 [2024-05-15 11:17:09.401066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.152 [2024-05-15 11:17:09.617341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:23:51.722 11:17:10 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:51.722 Dev_1 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.722 11:17:10 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:51.722 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:51.722 [ 00:23:51.722 { 00:23:51.722 "name": "Dev_1", 00:23:51.722 "aliases": [ 00:23:51.723 "b3f1e143-f485-4d04-baa2-94f16c245b40" 00:23:51.723 ], 00:23:51.723 "product_name": "Malloc disk", 00:23:51.723 "block_size": 512, 00:23:51.723 "num_blocks": 262144, 00:23:51.723 "uuid": "b3f1e143-f485-4d04-baa2-94f16c245b40", 00:23:51.723 "assigned_rate_limits": { 00:23:51.723 "rw_ios_per_sec": 0, 00:23:51.723 "rw_mbytes_per_sec": 0, 00:23:51.723 "r_mbytes_per_sec": 0, 00:23:51.723 "w_mbytes_per_sec": 0 00:23:51.723 }, 00:23:51.723 "claimed": false, 00:23:51.723 "zoned": false, 00:23:51.723 "supported_io_types": { 00:23:51.723 "read": true, 00:23:51.723 "write": true, 00:23:51.723 "unmap": true, 00:23:51.723 "write_zeroes": true, 00:23:51.723 "flush": true, 00:23:51.723 "reset": true, 00:23:51.723 "compare": false, 00:23:51.723 "compare_and_write": false, 00:23:51.723 "abort": true, 00:23:51.723 "nvme_admin": false, 00:23:51.723 "nvme_io": false 00:23:51.723 }, 00:23:51.723 "memory_domains": [ 00:23:51.723 { 00:23:51.723 "dma_device_id": "system", 00:23:51.723 "dma_device_type": 1 00:23:51.723 }, 00:23:51.723 { 00:23:51.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.723 "dma_device_type": 2 00:23:51.723 } 00:23:51.723 ], 00:23:51.723 "driver_specific": {} 00:23:51.723 } 00:23:51.723 ] 00:23:51.723 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.723 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:23:51.723 11:17:10 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:23:51.723 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.723 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:51.723 true 00:23:51.723 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.723 11:17:10 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:23:51.723 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.723 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:51.981 Dev_2 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.981 11:17:10 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.981 11:17:10 
blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:51.981 [ 00:23:51.981 { 00:23:51.981 "name": "Dev_2", 00:23:51.981 "aliases": [ 00:23:51.981 "2d5663a6-3924-427a-b793-25d85bdea192" 00:23:51.981 ], 00:23:51.981 "product_name": "Malloc disk", 00:23:51.981 "block_size": 512, 00:23:51.981 "num_blocks": 262144, 00:23:51.981 "uuid": "2d5663a6-3924-427a-b793-25d85bdea192", 00:23:51.981 "assigned_rate_limits": { 00:23:51.981 "rw_ios_per_sec": 0, 00:23:51.981 "rw_mbytes_per_sec": 0, 00:23:51.981 "r_mbytes_per_sec": 0, 00:23:51.981 "w_mbytes_per_sec": 0 00:23:51.981 }, 00:23:51.981 "claimed": false, 00:23:51.981 "zoned": false, 00:23:51.981 "supported_io_types": { 00:23:51.981 "read": true, 00:23:51.981 "write": true, 00:23:51.981 "unmap": true, 00:23:51.981 "write_zeroes": true, 00:23:51.981 "flush": true, 00:23:51.981 "reset": true, 00:23:51.981 "compare": false, 00:23:51.981 "compare_and_write": false, 00:23:51.981 "abort": true, 00:23:51.981 "nvme_admin": false, 00:23:51.981 "nvme_io": false 00:23:51.981 }, 00:23:51.981 "memory_domains": [ 00:23:51.981 { 00:23:51.981 "dma_device_id": "system", 00:23:51.981 "dma_device_type": 1 00:23:51.981 }, 00:23:51.981 { 00:23:51.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.981 "dma_device_type": 2 00:23:51.981 } 00:23:51.981 ], 00:23:51.981 "driver_specific": {} 00:23:51.981 } 00:23:51.981 ] 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:23:51.981 11:17:10 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.981 11:17:10 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 52799 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 52799 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:23:51.981 11:17:10 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.981 11:17:10 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 52799 00:23:51.981 Running I/O for 5 seconds... 
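For reference, the setup traced above is the error-injection leg of blockdev.sh. A rough manual equivalent, reusing only the RPC names, bdev names and paths that appear in this run (not the script's literal code), would be:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Dev_1        # the error bdev appears as EE_Dev_1 in this run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Dev_2 128 512
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests

With 'all failure -n 5' armed on EE_Dev_1, the perform_tests call below is expected to come back with an error, and the surrounding NOT/wait wrapper appears to treat that nonzero result as the passing outcome.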
00:23:51.981 task offset: 246192 on job bdev=EE_Dev_1 fails 00:23:51.981 00:23:51.981 Latency(us) 00:23:51.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.981 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:23:51.981 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:23:51.981 EE_Dev_1 : 0.00 27707.81 108.23 6297.23 0.00 417.65 67.96 729.83 00:23:51.981 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:23:51.981 Dev_2 : 0.00 32753.33 127.94 0.00 0.00 339.87 65.16 629.29 00:23:51.981 =================================================================================================================== 00:23:51.981 Total : 60461.14 236.18 6297.23 0.00 375.46 65.16 729.83 00:23:51.981 [2024-05-15 11:17:10.510576] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:52.239 request: 00:23:52.239 { 00:23:52.239 "method": "perform_tests", 00:23:52.239 "req_id": 1 00:23:52.239 } 00:23:52.239 Got JSON-RPC error response 00:23:52.239 response: 00:23:52.239 { 00:23:52.239 "code": -32603, 00:23:52.239 "message": "bdevperf failed with error Operation not permitted" 00:23:52.239 } 00:23:54.140 11:17:12 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:23:54.140 11:17:12 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:54.140 11:17:12 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:23:54.140 11:17:12 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:23:54.140 11:17:12 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:23:54.140 11:17:12 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:54.140 00:23:54.140 real 0m12.485s 00:23:54.140 user 0m12.403s 00:23:54.140 sys 0m0.851s 00:23:54.140 11:17:12 blockdev_general.bdev_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:54.140 ************************************ 00:23:54.141 END TEST bdev_error 00:23:54.141 ************************************ 00:23:54.141 11:17:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:23:54.141 11:17:12 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:23:54.141 11:17:12 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:54.141 11:17:12 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:54.141 11:17:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:23:54.141 ************************************ 00:23:54.141 START TEST bdev_stat 00:23:54.141 ************************************ 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- common/autotest_common.sh@1121 -- # stat_test_suite '' 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:23:54.141 Process Bdev IO statistics testing pid: 52862 00:23:54.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:54.141 11:17:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=52862 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 52862' 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 52862 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- common/autotest_common.sh@827 -- # '[' -z 52862 ']' 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:54.141 11:17:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:23:54.141 [2024-05-15 11:17:12.570265] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:23:54.141 [2024-05-15 11:17:12.570534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52862 ] 00:23:54.141 [2024-05-15 11:17:12.736933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:54.399 [2024-05-15 11:17:13.028985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.399 [2024-05-15 11:17:13.028991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.971 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:54.971 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # return 0 00:23:54.971 11:17:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:23:54.971 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.971 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:23:54.971 Malloc_STAT 00:23:54.971 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.971 11:17:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:23:54.971 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_STAT 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local i 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:23:54.972 [ 00:23:54.972 { 00:23:54.972 "name": "Malloc_STAT", 00:23:54.972 "aliases": [ 00:23:54.972 "d3dc6919-eecd-4fe3-b8a1-7417429799f7" 00:23:54.972 ], 00:23:54.972 "product_name": "Malloc disk", 00:23:54.972 "block_size": 512, 00:23:54.972 "num_blocks": 262144, 00:23:54.972 "uuid": "d3dc6919-eecd-4fe3-b8a1-7417429799f7", 00:23:54.972 "assigned_rate_limits": { 00:23:54.972 "rw_ios_per_sec": 0, 00:23:54.972 "rw_mbytes_per_sec": 0, 00:23:54.972 "r_mbytes_per_sec": 0, 00:23:54.972 "w_mbytes_per_sec": 0 00:23:54.972 }, 00:23:54.972 "claimed": false, 00:23:54.972 "zoned": false, 00:23:54.972 "supported_io_types": { 00:23:54.972 "read": true, 00:23:54.972 "write": true, 00:23:54.972 "unmap": true, 00:23:54.972 "write_zeroes": true, 00:23:54.972 "flush": true, 00:23:54.972 "reset": true, 00:23:54.972 "compare": false, 00:23:54.972 "compare_and_write": false, 00:23:54.972 "abort": true, 00:23:54.972 "nvme_admin": false, 00:23:54.972 "nvme_io": false 00:23:54.972 }, 00:23:54.972 "memory_domains": [ 00:23:54.972 { 00:23:54.972 "dma_device_id": "system", 00:23:54.972 "dma_device_type": 1 00:23:54.972 }, 00:23:54.972 { 00:23:54.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.972 "dma_device_type": 2 00:23:54.972 } 00:23:54.972 ], 00:23:54.972 "driver_specific": {} 00:23:54.972 } 00:23:54.972 ] 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # return 0 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:23:54.972 11:17:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:55.231 Running I/O for 10 seconds... 
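The statistics pass that follows polls the counters shown below. While bdevperf is still running on the default socket, roughly equivalent manual queries, using the same RPC, bdev name and jq filters seen in this run, would be:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_STAT        # whole-device counters
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c     # per-channel counters
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c | jq -r '.channels[0].num_read_ops'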
00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:23:57.133 "tick_rate": 2200000000, 00:23:57.133 "ticks": 1465059802340, 00:23:57.133 "bdevs": [ 00:23:57.133 { 00:23:57.133 "name": "Malloc_STAT", 00:23:57.133 "bytes_read": 1888522752, 00:23:57.133 "num_read_ops": 461059, 00:23:57.133 "bytes_written": 0, 00:23:57.133 "num_write_ops": 0, 00:23:57.133 "bytes_unmapped": 0, 00:23:57.133 "num_unmap_ops": 0, 00:23:57.133 "bytes_copied": 0, 00:23:57.133 "num_copy_ops": 0, 00:23:57.133 "read_latency_ticks": 2129470938149, 00:23:57.133 "max_read_latency_ticks": 9875136, 00:23:57.133 "min_read_latency_ticks": 297767, 00:23:57.133 "write_latency_ticks": 0, 00:23:57.133 "max_write_latency_ticks": 0, 00:23:57.133 "min_write_latency_ticks": 0, 00:23:57.133 "unmap_latency_ticks": 0, 00:23:57.133 "max_unmap_latency_ticks": 0, 00:23:57.133 "min_unmap_latency_ticks": 0, 00:23:57.133 "copy_latency_ticks": 0, 00:23:57.133 "max_copy_latency_ticks": 0, 00:23:57.133 "min_copy_latency_ticks": 0, 00:23:57.133 "io_error": {} 00:23:57.133 } 00:23:57.133 ] 00:23:57.133 }' 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=461059 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:23:57.133 "tick_rate": 2200000000, 00:23:57.133 "ticks": 1465239495113, 00:23:57.133 "name": "Malloc_STAT", 00:23:57.133 "channels": [ 00:23:57.133 { 00:23:57.133 "thread_id": 2, 00:23:57.133 "bytes_read": 1000341504, 00:23:57.133 "num_read_ops": 244224, 00:23:57.133 "bytes_written": 0, 00:23:57.133 "num_write_ops": 0, 00:23:57.133 "bytes_unmapped": 0, 00:23:57.133 "num_unmap_ops": 0, 
00:23:57.133 "bytes_copied": 0, 00:23:57.133 "num_copy_ops": 0, 00:23:57.133 "read_latency_ticks": 1110572201744, 00:23:57.133 "max_read_latency_ticks": 6096200, 00:23:57.133 "min_read_latency_ticks": 3295182, 00:23:57.133 "write_latency_ticks": 0, 00:23:57.133 "max_write_latency_ticks": 0, 00:23:57.133 "min_write_latency_ticks": 0, 00:23:57.133 "unmap_latency_ticks": 0, 00:23:57.133 "max_unmap_latency_ticks": 0, 00:23:57.133 "min_unmap_latency_ticks": 0, 00:23:57.133 "copy_latency_ticks": 0, 00:23:57.133 "max_copy_latency_ticks": 0, 00:23:57.133 "min_copy_latency_ticks": 0 00:23:57.133 }, 00:23:57.133 { 00:23:57.133 "thread_id": 3, 00:23:57.133 "bytes_read": 966787072, 00:23:57.133 "num_read_ops": 236032, 00:23:57.133 "bytes_written": 0, 00:23:57.133 "num_write_ops": 0, 00:23:57.133 "bytes_unmapped": 0, 00:23:57.133 "num_unmap_ops": 0, 00:23:57.133 "bytes_copied": 0, 00:23:57.133 "num_copy_ops": 0, 00:23:57.133 "read_latency_ticks": 1111476069950, 00:23:57.133 "max_read_latency_ticks": 9875136, 00:23:57.133 "min_read_latency_ticks": 3884096, 00:23:57.133 "write_latency_ticks": 0, 00:23:57.133 "max_write_latency_ticks": 0, 00:23:57.133 "min_write_latency_ticks": 0, 00:23:57.133 "unmap_latency_ticks": 0, 00:23:57.133 "max_unmap_latency_ticks": 0, 00:23:57.133 "min_unmap_latency_ticks": 0, 00:23:57.133 "copy_latency_ticks": 0, 00:23:57.133 "max_copy_latency_ticks": 0, 00:23:57.133 "min_copy_latency_ticks": 0 00:23:57.133 } 00:23:57.133 ] 00:23:57.133 }' 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=244224 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=244224 00:23:57.133 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:23:57.392 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=236032 00:23:57.392 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=480256 00:23:57.392 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:23:57.392 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.392 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:23:57.392 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.392 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:23:57.392 "tick_rate": 2200000000, 00:23:57.392 "ticks": 1465553479911, 00:23:57.392 "bdevs": [ 00:23:57.392 { 00:23:57.392 "name": "Malloc_STAT", 00:23:57.392 "bytes_read": 2105577984, 00:23:57.392 "num_read_ops": 514051, 00:23:57.392 "bytes_written": 0, 00:23:57.392 "num_write_ops": 0, 00:23:57.392 "bytes_unmapped": 0, 00:23:57.392 "num_unmap_ops": 0, 00:23:57.392 "bytes_copied": 0, 00:23:57.392 "num_copy_ops": 0, 00:23:57.392 "read_latency_ticks": 2382619647111, 00:23:57.392 "max_read_latency_ticks": 9875136, 00:23:57.392 "min_read_latency_ticks": 297767, 00:23:57.392 "write_latency_ticks": 0, 00:23:57.393 "max_write_latency_ticks": 0, 00:23:57.393 "min_write_latency_ticks": 0, 00:23:57.393 "unmap_latency_ticks": 0, 00:23:57.393 "max_unmap_latency_ticks": 0, 00:23:57.393 "min_unmap_latency_ticks": 0, 00:23:57.393 "copy_latency_ticks": 0, 00:23:57.393 "max_copy_latency_ticks": 0, 00:23:57.393 
"min_copy_latency_ticks": 0, 00:23:57.393 "io_error": {} 00:23:57.393 } 00:23:57.393 ] 00:23:57.393 }' 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=514051 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 480256 -lt 461059 ']' 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 480256 -gt 514051 ']' 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:23:57.393 00:23:57.393 Latency(us) 00:23:57.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.393 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:23:57.393 Malloc_STAT : 2.19 123832.17 483.72 0.00 0.00 2063.89 688.87 2785.28 00:23:57.393 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:23:57.393 Malloc_STAT : 2.19 119040.15 465.00 0.00 0.00 2146.80 655.36 4498.15 00:23:57.393 =================================================================================================================== 00:23:57.393 Total : 242872.33 948.72 0.00 0.00 2104.52 655.36 4498.15 00:23:57.393 0 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 52862 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@946 -- # '[' -z 52862 ']' 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # kill -0 52862 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # uname 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:57.393 11:17:15 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 52862 00:23:57.393 killing process with pid 52862 00:23:57.393 11:17:16 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:57.393 11:17:16 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:57.393 11:17:16 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52862' 00:23:57.393 11:17:16 blockdev_general.bdev_stat -- common/autotest_common.sh@965 -- # kill 52862 00:23:57.393 Received shutdown signal, test time was about 2.327352 seconds 00:23:57.393 00:23:57.393 Latency(us) 00:23:57.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.393 =================================================================================================================== 00:23:57.393 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.393 11:17:16 blockdev_general.bdev_stat -- common/autotest_common.sh@970 -- # wait 52862 00:23:59.295 ************************************ 00:23:59.295 END TEST bdev_stat 00:23:59.295 ************************************ 00:23:59.295 11:17:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:23:59.295 00:23:59.295 real 0m5.015s 00:23:59.295 user 0m9.288s 00:23:59.295 sys 0m0.419s 00:23:59.295 
11:17:17 blockdev_general.bdev_stat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:59.295 11:17:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:59.295 ************************************ 00:23:59.295 END TEST blockdev_general 00:23:59.295 ************************************ 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:23:59.295 11:17:17 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:23:59.295 00:23:59.295 real 2m2.243s 00:23:59.295 user 5m26.833s 00:23:59.295 sys 0m9.249s 00:23:59.295 11:17:17 blockdev_general -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:59.295 11:17:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:23:59.295 11:17:17 -- spdk/autotest.sh@186 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:23:59.295 11:17:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:59.295 11:17:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:59.295 11:17:17 -- common/autotest_common.sh@10 -- # set +x 00:23:59.295 ************************************ 00:23:59.295 START TEST bdev_raid 00:23:59.295 ************************************ 00:23:59.295 11:17:17 bdev_raid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:23:59.295 * Looking for test storage... 00:23:59.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:23:59.296 11:17:17 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:59.296 11:17:17 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:23:59.296 11:17:17 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:23:59.296 11:17:17 bdev_raid -- bdev/bdev_raid.sh@800 -- # trap 'on_error_exit;' ERR 00:23:59.296 11:17:17 bdev_raid -- bdev/bdev_raid.sh@802 -- # base_blocklen=512 00:23:59.296 11:17:17 bdev_raid -- bdev/bdev_raid.sh@804 -- # uname -s 00:23:59.296 11:17:17 bdev_raid -- bdev/bdev_raid.sh@804 -- # '[' Linux = Linux ']' 00:23:59.296 11:17:17 bdev_raid -- bdev/bdev_raid.sh@804 -- # modprobe -n nbd 00:23:59.296 modprobe: FATAL: Module nbd not found. 
00:23:59.296 11:17:17 bdev_raid -- bdev/bdev_raid.sh@811 -- # run_test raid0_resize_test raid0_resize_test 00:23:59.296 11:17:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:59.296 11:17:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:59.296 11:17:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:59.296 ************************************ 00:23:59.296 START TEST raid0_resize_test 00:23:59.296 ************************************ 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # raid0_resize_test 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # raid_pid=53029 00:23:59.296 Process raid pid: 53029 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # echo 'Process raid pid: 53029' 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # waitforlisten 53029 /var/tmp/spdk-raid.sock 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 53029 ']' 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:59.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:59.296 11:17:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.296 [2024-05-15 11:17:17.799578] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:23:59.296 [2024-05-15 11:17:17.799786] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.554 [2024-05-15 11:17:17.963260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.554 [2024-05-15 11:17:18.182321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.812 [2024-05-15 11:17:18.385024] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:00.070 11:17:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:00.070 11:17:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:24:00.070 11:17:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:24:00.329 Base_1 00:24:00.329 11:17:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:24:00.588 Base_2 00:24:00.588 11:17:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@363 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:24:00.845 [2024-05-15 11:17:19.246663] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:24:00.845 [2024-05-15 11:17:19.248340] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:24:00.845 [2024-05-15 11:17:19.248398] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:24:00.845 [2024-05-15 11:17:19.248411] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:00.845 [2024-05-15 11:17:19.248560] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:24:00.845 [2024-05-15 11:17:19.248802] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:24:00.845 [2024-05-15 11:17:19.248817] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000011500 00:24:00.845 [2024-05-15 11:17:19.248968] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.845 11:17:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:24:00.845 [2024-05-15 11:17:19.434624] bdev_raid.c:2216:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:00.845 [2024-05-15 11:17:19.434658] bdev_raid.c:2229:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:24:00.845 true 00:24:00.845 11:17:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # jq '.[].num_blocks' 00:24:00.845 11:17:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:24:01.104 [2024-05-15 11:17:19.670736] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:01.104 11:17:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # blkcnt=131072 00:24:01.104 11:17:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # raid_size_mb=64 00:24:01.104 11:17:19 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@371 -- # '[' 64 '!=' 64 ']' 00:24:01.104 11:17:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:24:01.387 [2024-05-15 11:17:19.870679] bdev_raid.c:2216:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:01.387 [2024-05-15 11:17:19.870715] bdev_raid.c:2229:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:24:01.387 [2024-05-15 11:17:19.870776] bdev_raid.c:2243:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:24:01.387 true 00:24:01.387 11:17:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # jq '.[].num_blocks' 00:24:01.387 11:17:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:24:01.645 [2024-05-15 11:17:20.138815] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # blkcnt=262144 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # raid_size_mb=128 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 53029 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 53029 ']' 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 53029 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 53029 00:24:01.645 killing process with pid 53029 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53029' 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 53029 00:24:01.645 11:17:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 53029 00:24:01.645 [2024-05-15 11:17:20.178403] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:01.645 [2024-05-15 11:17:20.178505] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:01.645 [2024-05-15 11:17:20.178541] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:01.645 [2024-05-15 11:17:20.178561] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Raid, state offline 00:24:01.645 [2024-05-15 11:17:20.179042] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:03.018 ************************************ 00:24:03.018 END TEST raid0_resize_test 00:24:03.018 ************************************ 00:24:03.018 11:17:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:24:03.018 00:24:03.018 real 0m3.728s 00:24:03.018 user 0m5.142s 00:24:03.018 sys 0m0.480s 00:24:03.018 
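raid0_resize_test as traced above reduces to a short RPC sequence against the dedicated /var/tmp/spdk-raid.sock socket. The commands below mirror the ones in the trace and are a sketch, not the script's literal code ($rpc is shorthand introduced here for readability):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_null_create Base_1 32 512
  $rpc bdev_null_create Base_2 32 512
  $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  $rpc bdev_null_resize Base_1 64
  $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072 (64 MiB at 512-byte blocks)
  $rpc bdev_null_resize Base_2 64
  $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # now 262144 (128 MiB)

As the trace shows, the raid0 volume only grows once every base bdev has been resized: after resizing Base_1 alone the block count stays at 131072, and only after Base_2 is resized does the raid report 262144 blocks.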
11:17:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:03.018 11:17:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.018 11:17:21 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:24:03.018 11:17:21 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:24:03.018 11:17:21 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:24:03.018 11:17:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:03.018 11:17:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:03.018 11:17:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:03.018 ************************************ 00:24:03.018 START TEST raid_state_function_test 00:24:03.018 ************************************ 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:03.018 Process raid pid: 53120 00:24:03.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=53120 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 53120' 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 53120 /var/tmp/spdk-raid.sock 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 53120 ']' 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:03.018 11:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.018 [2024-05-15 11:17:21.580128] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:24:03.018 [2024-05-15 11:17:21.580329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.276 [2024-05-15 11:17:21.751448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.533 [2024-05-15 11:17:21.974919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.791 [2024-05-15 11:17:22.176772] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.791 11:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:03.791 11:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:24:03.791 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:04.049 [2024-05-15 11:17:22.593441] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:04.049 [2024-05-15 11:17:22.593539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:04.049 [2024-05-15 11:17:22.593567] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:04.049 [2024-05-15 11:17:22.593593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:04.049 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.307 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:04.307 "name": "Existed_Raid", 00:24:04.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.307 "strip_size_kb": 64, 00:24:04.307 "state": "configuring", 00:24:04.307 "raid_level": "raid0", 00:24:04.307 "superblock": false, 00:24:04.307 "num_base_bdevs": 2, 00:24:04.307 "num_base_bdevs_discovered": 0, 00:24:04.307 "num_base_bdevs_operational": 2, 00:24:04.307 "base_bdevs_list": [ 00:24:04.307 { 
00:24:04.307 "name": "BaseBdev1", 00:24:04.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.307 "is_configured": false, 00:24:04.307 "data_offset": 0, 00:24:04.307 "data_size": 0 00:24:04.307 }, 00:24:04.307 { 00:24:04.307 "name": "BaseBdev2", 00:24:04.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.307 "is_configured": false, 00:24:04.307 "data_offset": 0, 00:24:04.307 "data_size": 0 00:24:04.307 } 00:24:04.307 ] 00:24:04.307 }' 00:24:04.307 11:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:04.307 11:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.874 11:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:05.132 [2024-05-15 11:17:23.637536] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:05.132 [2024-05-15 11:17:23.637586] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:24:05.132 11:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:05.390 [2024-05-15 11:17:23.829547] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:05.390 [2024-05-15 11:17:23.829639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:05.390 [2024-05-15 11:17:23.829656] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:05.390 [2024-05-15 11:17:23.829683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:05.390 11:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:05.649 [2024-05-15 11:17:24.118294] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:05.649 BaseBdev1 00:24:05.649 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:24:05.649 11:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:05.649 11:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:05.649 11:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:05.649 11:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:05.649 11:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:05.649 11:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:05.908 11:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:05.908 [ 00:24:05.908 { 00:24:05.908 "name": "BaseBdev1", 00:24:05.908 "aliases": [ 00:24:05.908 "aa05900d-0e56-428f-879e-a566dd48db84" 00:24:05.908 ], 00:24:05.908 "product_name": "Malloc disk", 00:24:05.908 "block_size": 512, 00:24:05.908 "num_blocks": 65536, 
00:24:05.908 "uuid": "aa05900d-0e56-428f-879e-a566dd48db84", 00:24:05.908 "assigned_rate_limits": { 00:24:05.908 "rw_ios_per_sec": 0, 00:24:05.908 "rw_mbytes_per_sec": 0, 00:24:05.908 "r_mbytes_per_sec": 0, 00:24:05.908 "w_mbytes_per_sec": 0 00:24:05.908 }, 00:24:05.908 "claimed": true, 00:24:05.908 "claim_type": "exclusive_write", 00:24:05.908 "zoned": false, 00:24:05.908 "supported_io_types": { 00:24:05.908 "read": true, 00:24:05.908 "write": true, 00:24:05.908 "unmap": true, 00:24:05.908 "write_zeroes": true, 00:24:05.908 "flush": true, 00:24:05.908 "reset": true, 00:24:05.908 "compare": false, 00:24:05.908 "compare_and_write": false, 00:24:05.908 "abort": true, 00:24:05.908 "nvme_admin": false, 00:24:05.908 "nvme_io": false 00:24:05.908 }, 00:24:05.908 "memory_domains": [ 00:24:05.908 { 00:24:05.908 "dma_device_id": "system", 00:24:05.908 "dma_device_type": 1 00:24:05.908 }, 00:24:05.908 { 00:24:05.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.908 "dma_device_type": 2 00:24:05.908 } 00:24:05.908 ], 00:24:05.908 "driver_specific": {} 00:24:05.908 } 00:24:05.908 ] 00:24:05.908 11:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:05.908 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:05.908 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:05.908 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:05.908 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:05.908 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:05.908 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:05.908 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:05.909 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:05.909 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:05.909 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:05.909 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.909 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.167 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:06.167 "name": "Existed_Raid", 00:24:06.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.167 "strip_size_kb": 64, 00:24:06.167 "state": "configuring", 00:24:06.167 "raid_level": "raid0", 00:24:06.167 "superblock": false, 00:24:06.167 "num_base_bdevs": 2, 00:24:06.167 "num_base_bdevs_discovered": 1, 00:24:06.167 "num_base_bdevs_operational": 2, 00:24:06.167 "base_bdevs_list": [ 00:24:06.167 { 00:24:06.167 "name": "BaseBdev1", 00:24:06.167 "uuid": "aa05900d-0e56-428f-879e-a566dd48db84", 00:24:06.167 "is_configured": true, 00:24:06.167 "data_offset": 0, 00:24:06.167 "data_size": 65536 00:24:06.167 }, 00:24:06.167 { 00:24:06.167 "name": "BaseBdev2", 00:24:06.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.167 "is_configured": false, 
00:24:06.167 "data_offset": 0, 00:24:06.167 "data_size": 0 00:24:06.167 } 00:24:06.167 ] 00:24:06.167 }' 00:24:06.167 11:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:06.167 11:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.102 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:07.102 [2024-05-15 11:17:25.626663] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:07.102 [2024-05-15 11:17:25.626760] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:24:07.102 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:07.361 [2024-05-15 11:17:25.826798] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:07.361 [2024-05-15 11:17:25.828457] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:07.361 [2024-05-15 11:17:25.828518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.361 11:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.619 11:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:07.619 "name": "Existed_Raid", 00:24:07.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.619 "strip_size_kb": 64, 00:24:07.619 "state": "configuring", 00:24:07.619 "raid_level": "raid0", 00:24:07.619 "superblock": false, 00:24:07.619 "num_base_bdevs": 2, 00:24:07.619 "num_base_bdevs_discovered": 1, 00:24:07.619 "num_base_bdevs_operational": 2, 00:24:07.619 
"base_bdevs_list": [ 00:24:07.619 { 00:24:07.619 "name": "BaseBdev1", 00:24:07.619 "uuid": "aa05900d-0e56-428f-879e-a566dd48db84", 00:24:07.619 "is_configured": true, 00:24:07.619 "data_offset": 0, 00:24:07.619 "data_size": 65536 00:24:07.619 }, 00:24:07.619 { 00:24:07.619 "name": "BaseBdev2", 00:24:07.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.619 "is_configured": false, 00:24:07.619 "data_offset": 0, 00:24:07.619 "data_size": 0 00:24:07.619 } 00:24:07.619 ] 00:24:07.619 }' 00:24:07.619 11:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:07.619 11:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.184 11:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:08.442 [2024-05-15 11:17:26.973819] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:08.442 [2024-05-15 11:17:26.973872] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:24:08.442 BaseBdev2 00:24:08.442 [2024-05-15 11:17:26.974127] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:08.442 [2024-05-15 11:17:26.974459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:24:08.442 [2024-05-15 11:17:26.975003] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:24:08.442 [2024-05-15 11:17:26.975039] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:24:08.442 [2024-05-15 11:17:26.975538] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.442 11:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:24:08.442 11:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:08.442 11:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:08.442 11:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:08.442 11:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:08.442 11:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:08.442 11:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:08.700 11:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:08.959 [ 00:24:08.959 { 00:24:08.959 "name": "BaseBdev2", 00:24:08.959 "aliases": [ 00:24:08.959 "084aed60-cabb-4522-b731-f52b6b45ce14" 00:24:08.959 ], 00:24:08.959 "product_name": "Malloc disk", 00:24:08.959 "block_size": 512, 00:24:08.959 "num_blocks": 65536, 00:24:08.959 "uuid": "084aed60-cabb-4522-b731-f52b6b45ce14", 00:24:08.959 "assigned_rate_limits": { 00:24:08.959 "rw_ios_per_sec": 0, 00:24:08.959 "rw_mbytes_per_sec": 0, 00:24:08.959 "r_mbytes_per_sec": 0, 00:24:08.959 "w_mbytes_per_sec": 0 00:24:08.959 }, 00:24:08.959 "claimed": true, 00:24:08.959 "claim_type": "exclusive_write", 00:24:08.959 "zoned": false, 00:24:08.959 "supported_io_types": { 
00:24:08.959 "read": true, 00:24:08.959 "write": true, 00:24:08.959 "unmap": true, 00:24:08.959 "write_zeroes": true, 00:24:08.959 "flush": true, 00:24:08.959 "reset": true, 00:24:08.959 "compare": false, 00:24:08.959 "compare_and_write": false, 00:24:08.959 "abort": true, 00:24:08.959 "nvme_admin": false, 00:24:08.959 "nvme_io": false 00:24:08.959 }, 00:24:08.959 "memory_domains": [ 00:24:08.959 { 00:24:08.960 "dma_device_id": "system", 00:24:08.960 "dma_device_type": 1 00:24:08.960 }, 00:24:08.960 { 00:24:08.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.960 "dma_device_type": 2 00:24:08.960 } 00:24:08.960 ], 00:24:08.960 "driver_specific": {} 00:24:08.960 } 00:24:08.960 ] 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.960 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.218 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:09.218 "name": "Existed_Raid", 00:24:09.218 "uuid": "6c828509-4483-4a45-a3bb-20939abafc96", 00:24:09.218 "strip_size_kb": 64, 00:24:09.218 "state": "online", 00:24:09.218 "raid_level": "raid0", 00:24:09.218 "superblock": false, 00:24:09.218 "num_base_bdevs": 2, 00:24:09.218 "num_base_bdevs_discovered": 2, 00:24:09.218 "num_base_bdevs_operational": 2, 00:24:09.218 "base_bdevs_list": [ 00:24:09.218 { 00:24:09.218 "name": "BaseBdev1", 00:24:09.218 "uuid": "aa05900d-0e56-428f-879e-a566dd48db84", 00:24:09.218 "is_configured": true, 00:24:09.218 "data_offset": 0, 00:24:09.218 "data_size": 65536 00:24:09.218 }, 00:24:09.218 { 00:24:09.218 "name": "BaseBdev2", 00:24:09.218 "uuid": "084aed60-cabb-4522-b731-f52b6b45ce14", 00:24:09.218 "is_configured": true, 00:24:09.218 "data_offset": 0, 00:24:09.218 "data_size": 65536 00:24:09.218 } 00:24:09.218 ] 00:24:09.218 }' 00:24:09.218 11:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:24:09.218 11:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.785 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:24:09.785 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:24:09.785 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:09.785 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:09.785 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:09.785 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:24:09.785 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:09.785 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:10.043 [2024-05-15 11:17:28.562376] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:10.043 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:10.043 "name": "Existed_Raid", 00:24:10.043 "aliases": [ 00:24:10.043 "6c828509-4483-4a45-a3bb-20939abafc96" 00:24:10.043 ], 00:24:10.043 "product_name": "Raid Volume", 00:24:10.043 "block_size": 512, 00:24:10.043 "num_blocks": 131072, 00:24:10.043 "uuid": "6c828509-4483-4a45-a3bb-20939abafc96", 00:24:10.043 "assigned_rate_limits": { 00:24:10.043 "rw_ios_per_sec": 0, 00:24:10.043 "rw_mbytes_per_sec": 0, 00:24:10.043 "r_mbytes_per_sec": 0, 00:24:10.043 "w_mbytes_per_sec": 0 00:24:10.043 }, 00:24:10.043 "claimed": false, 00:24:10.043 "zoned": false, 00:24:10.043 "supported_io_types": { 00:24:10.043 "read": true, 00:24:10.043 "write": true, 00:24:10.043 "unmap": true, 00:24:10.043 "write_zeroes": true, 00:24:10.043 "flush": true, 00:24:10.043 "reset": true, 00:24:10.043 "compare": false, 00:24:10.043 "compare_and_write": false, 00:24:10.043 "abort": false, 00:24:10.043 "nvme_admin": false, 00:24:10.043 "nvme_io": false 00:24:10.043 }, 00:24:10.043 "memory_domains": [ 00:24:10.043 { 00:24:10.043 "dma_device_id": "system", 00:24:10.043 "dma_device_type": 1 00:24:10.043 }, 00:24:10.043 { 00:24:10.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.044 "dma_device_type": 2 00:24:10.044 }, 00:24:10.044 { 00:24:10.044 "dma_device_id": "system", 00:24:10.044 "dma_device_type": 1 00:24:10.044 }, 00:24:10.044 { 00:24:10.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.044 "dma_device_type": 2 00:24:10.044 } 00:24:10.044 ], 00:24:10.044 "driver_specific": { 00:24:10.044 "raid": { 00:24:10.044 "uuid": "6c828509-4483-4a45-a3bb-20939abafc96", 00:24:10.044 "strip_size_kb": 64, 00:24:10.044 "state": "online", 00:24:10.044 "raid_level": "raid0", 00:24:10.044 "superblock": false, 00:24:10.044 "num_base_bdevs": 2, 00:24:10.044 "num_base_bdevs_discovered": 2, 00:24:10.044 "num_base_bdevs_operational": 2, 00:24:10.044 "base_bdevs_list": [ 00:24:10.044 { 00:24:10.044 "name": "BaseBdev1", 00:24:10.044 "uuid": "aa05900d-0e56-428f-879e-a566dd48db84", 00:24:10.044 "is_configured": true, 00:24:10.044 "data_offset": 0, 00:24:10.044 "data_size": 65536 00:24:10.044 }, 00:24:10.044 { 00:24:10.044 "name": "BaseBdev2", 00:24:10.044 "uuid": "084aed60-cabb-4522-b731-f52b6b45ce14", 00:24:10.044 "is_configured": true, 00:24:10.044 "data_offset": 0, 
00:24:10.044 "data_size": 65536 00:24:10.044 } 00:24:10.044 ] 00:24:10.044 } 00:24:10.044 } 00:24:10.044 }' 00:24:10.044 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:10.044 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:24:10.044 BaseBdev2' 00:24:10.044 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:10.044 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:10.044 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:10.301 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:10.301 "name": "BaseBdev1", 00:24:10.301 "aliases": [ 00:24:10.301 "aa05900d-0e56-428f-879e-a566dd48db84" 00:24:10.301 ], 00:24:10.301 "product_name": "Malloc disk", 00:24:10.301 "block_size": 512, 00:24:10.301 "num_blocks": 65536, 00:24:10.301 "uuid": "aa05900d-0e56-428f-879e-a566dd48db84", 00:24:10.301 "assigned_rate_limits": { 00:24:10.301 "rw_ios_per_sec": 0, 00:24:10.301 "rw_mbytes_per_sec": 0, 00:24:10.301 "r_mbytes_per_sec": 0, 00:24:10.301 "w_mbytes_per_sec": 0 00:24:10.301 }, 00:24:10.301 "claimed": true, 00:24:10.301 "claim_type": "exclusive_write", 00:24:10.301 "zoned": false, 00:24:10.301 "supported_io_types": { 00:24:10.301 "read": true, 00:24:10.301 "write": true, 00:24:10.301 "unmap": true, 00:24:10.301 "write_zeroes": true, 00:24:10.301 "flush": true, 00:24:10.301 "reset": true, 00:24:10.301 "compare": false, 00:24:10.301 "compare_and_write": false, 00:24:10.301 "abort": true, 00:24:10.301 "nvme_admin": false, 00:24:10.301 "nvme_io": false 00:24:10.301 }, 00:24:10.301 "memory_domains": [ 00:24:10.301 { 00:24:10.301 "dma_device_id": "system", 00:24:10.301 "dma_device_type": 1 00:24:10.301 }, 00:24:10.301 { 00:24:10.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.301 "dma_device_type": 2 00:24:10.301 } 00:24:10.301 ], 00:24:10.301 "driver_specific": {} 00:24:10.301 }' 00:24:10.301 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:10.301 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:10.559 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:10.559 11:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:10.559 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:10.559 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:10.559 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:10.559 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:10.817 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:10.817 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:10.817 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:10.817 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:10.817 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 
-- # for name in $base_bdev_names 00:24:10.817 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:10.817 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:11.074 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:11.074 "name": "BaseBdev2", 00:24:11.074 "aliases": [ 00:24:11.074 "084aed60-cabb-4522-b731-f52b6b45ce14" 00:24:11.074 ], 00:24:11.074 "product_name": "Malloc disk", 00:24:11.074 "block_size": 512, 00:24:11.074 "num_blocks": 65536, 00:24:11.074 "uuid": "084aed60-cabb-4522-b731-f52b6b45ce14", 00:24:11.074 "assigned_rate_limits": { 00:24:11.074 "rw_ios_per_sec": 0, 00:24:11.074 "rw_mbytes_per_sec": 0, 00:24:11.074 "r_mbytes_per_sec": 0, 00:24:11.074 "w_mbytes_per_sec": 0 00:24:11.074 }, 00:24:11.074 "claimed": true, 00:24:11.074 "claim_type": "exclusive_write", 00:24:11.074 "zoned": false, 00:24:11.074 "supported_io_types": { 00:24:11.074 "read": true, 00:24:11.074 "write": true, 00:24:11.074 "unmap": true, 00:24:11.074 "write_zeroes": true, 00:24:11.074 "flush": true, 00:24:11.074 "reset": true, 00:24:11.074 "compare": false, 00:24:11.074 "compare_and_write": false, 00:24:11.074 "abort": true, 00:24:11.074 "nvme_admin": false, 00:24:11.074 "nvme_io": false 00:24:11.074 }, 00:24:11.074 "memory_domains": [ 00:24:11.074 { 00:24:11.074 "dma_device_id": "system", 00:24:11.074 "dma_device_type": 1 00:24:11.074 }, 00:24:11.074 { 00:24:11.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.074 "dma_device_type": 2 00:24:11.074 } 00:24:11.074 ], 00:24:11.074 "driver_specific": {} 00:24:11.074 }' 00:24:11.074 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:11.074 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:11.074 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:11.074 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:11.332 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:11.332 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:11.332 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:11.332 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:11.332 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:11.332 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:11.332 11:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:11.590 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:11.590 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:11.847 [2024-05-15 11:17:30.238671] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:11.847 [2024-05-15 11:17:30.238712] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:11.848 [2024-05-15 11:17:30.238768] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:11.848 11:17:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.848 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.106 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:12.106 "name": "Existed_Raid", 00:24:12.106 "uuid": "6c828509-4483-4a45-a3bb-20939abafc96", 00:24:12.106 "strip_size_kb": 64, 00:24:12.106 "state": "offline", 00:24:12.106 "raid_level": "raid0", 00:24:12.106 "superblock": false, 00:24:12.106 "num_base_bdevs": 2, 00:24:12.106 "num_base_bdevs_discovered": 1, 00:24:12.106 "num_base_bdevs_operational": 1, 00:24:12.106 "base_bdevs_list": [ 00:24:12.106 { 00:24:12.106 "name": null, 00:24:12.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.106 "is_configured": false, 00:24:12.106 "data_offset": 0, 00:24:12.106 "data_size": 65536 00:24:12.106 }, 00:24:12.106 { 00:24:12.106 "name": "BaseBdev2", 00:24:12.106 "uuid": "084aed60-cabb-4522-b731-f52b6b45ce14", 00:24:12.106 "is_configured": true, 00:24:12.106 "data_offset": 0, 00:24:12.106 "data_size": 65536 00:24:12.106 } 00:24:12.106 ] 00:24:12.106 }' 00:24:12.106 11:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:12.106 11:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.671 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:12.671 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:12.671 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.671 11:17:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:12.929 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:12.929 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:12.929 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:13.187 [2024-05-15 11:17:31.639746] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:13.187 [2024-05-15 11:17:31.639845] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:24:13.187 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:13.187 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:13.187 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.187 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 53120 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 53120 ']' 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 53120 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 53120 00:24:13.445 killing process with pid 53120 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53120' 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 53120 00:24:13.445 11:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 53120 00:24:13.445 [2024-05-15 11:17:31.983672] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:13.445 [2024-05-15 11:17:31.983782] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:24:14.818 00:24:14.818 real 0m11.807s 00:24:14.818 user 0m20.875s 00:24:14.818 sys 0m1.241s 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:14.818 ************************************ 00:24:14.818 END TEST raid_state_function_test 00:24:14.818 ************************************ 00:24:14.818 
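For reference, the raid0 state transitions traced in the test above can be reproduced by hand against the same test socket. This is a minimal sketch, assuming an SPDK bdev_svc app is already listening on /var/tmp/spdk-raid.sock and that no bdevs with these names exist yet; the bdev names, sizes, strip size, and jq filter simply mirror the trace and are not part of the test suite itself.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Create two 32 MiB malloc base bdevs with a 512-byte block size.
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
# Assemble them into a raid0 volume with a 64 KiB strip size.
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# The raid should now report the 'online' state with two discovered base bdevs.
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
# raid0 has no redundancy, so removing a base bdev takes the raid offline.
"$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
# Clean up.
"$rpc" -s "$sock" bdev_raid_delete Existed_Raid
"$rpc" -s "$sock" bdev_malloc_delete BaseBdev2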
11:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 11:17:33 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:24:14.818 11:17:33 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:14.818 11:17:33 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:14.818 11:17:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 ************************************ 00:24:14.818 START TEST raid_state_function_test_sb 00:24:14.818 ************************************ 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:14.818 Process raid pid: 53510 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=53510 00:24:14.818 11:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 53510' 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 53510 /var/tmp/spdk-raid.sock 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 53510 ']' 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:14.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:14.818 11:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.076 [2024-05-15 11:17:33.455231] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:24:15.076 [2024-05-15 11:17:33.455419] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.076 [2024-05-15 11:17:33.622364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.334 [2024-05-15 11:17:33.871252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.591 [2024-05-15 11:17:34.076666] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:15.849 11:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:15.849 11:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:24:15.849 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:15.849 [2024-05-15 11:17:34.477120] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:15.849 [2024-05-15 11:17:34.477214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:15.849 [2024-05-15 11:17:34.477246] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:15.849 [2024-05-15 11:17:34.477266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:16.108 11:17:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:16.108 "name": "Existed_Raid", 00:24:16.108 "uuid": "7b3e0fec-2477-4365-bd58-142c6f6383f1", 00:24:16.108 "strip_size_kb": 64, 00:24:16.108 "state": "configuring", 00:24:16.108 "raid_level": "raid0", 00:24:16.108 "superblock": true, 00:24:16.108 "num_base_bdevs": 2, 00:24:16.108 "num_base_bdevs_discovered": 0, 00:24:16.108 "num_base_bdevs_operational": 2, 00:24:16.108 "base_bdevs_list": [ 00:24:16.108 { 00:24:16.108 "name": "BaseBdev1", 00:24:16.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.108 "is_configured": false, 00:24:16.108 "data_offset": 0, 00:24:16.108 "data_size": 0 00:24:16.108 }, 00:24:16.108 { 00:24:16.108 "name": "BaseBdev2", 00:24:16.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.108 "is_configured": false, 00:24:16.108 "data_offset": 0, 00:24:16.108 "data_size": 0 00:24:16.108 } 00:24:16.108 ] 00:24:16.108 }' 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:16.108 11:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.043 11:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:17.043 [2024-05-15 11:17:35.593720] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:17.043 [2024-05-15 11:17:35.593766] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:24:17.043 11:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:17.302 [2024-05-15 11:17:35.805798] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:17.302 [2024-05-15 11:17:35.805911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:17.302 [2024-05-15 11:17:35.805928] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:17.302 [2024-05-15 11:17:35.805954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:17.302 11:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev1 00:24:17.560 [2024-05-15 11:17:36.045211] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:17.560 BaseBdev1 00:24:17.560 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:24:17.560 11:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:17.560 11:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:17.560 11:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:17.560 11:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:17.561 11:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:17.561 11:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:17.819 11:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:17.819 [ 00:24:17.819 { 00:24:17.819 "name": "BaseBdev1", 00:24:17.819 "aliases": [ 00:24:17.819 "8fb48b27-5679-4a46-8b6b-d377caa039be" 00:24:17.819 ], 00:24:17.819 "product_name": "Malloc disk", 00:24:17.819 "block_size": 512, 00:24:17.819 "num_blocks": 65536, 00:24:17.819 "uuid": "8fb48b27-5679-4a46-8b6b-d377caa039be", 00:24:17.819 "assigned_rate_limits": { 00:24:17.819 "rw_ios_per_sec": 0, 00:24:17.819 "rw_mbytes_per_sec": 0, 00:24:17.819 "r_mbytes_per_sec": 0, 00:24:17.819 "w_mbytes_per_sec": 0 00:24:17.819 }, 00:24:17.819 "claimed": true, 00:24:17.819 "claim_type": "exclusive_write", 00:24:17.819 "zoned": false, 00:24:17.819 "supported_io_types": { 00:24:17.819 "read": true, 00:24:17.819 "write": true, 00:24:17.819 "unmap": true, 00:24:17.819 "write_zeroes": true, 00:24:17.819 "flush": true, 00:24:17.819 "reset": true, 00:24:17.819 "compare": false, 00:24:17.819 "compare_and_write": false, 00:24:17.819 "abort": true, 00:24:17.819 "nvme_admin": false, 00:24:17.819 "nvme_io": false 00:24:17.819 }, 00:24:17.819 "memory_domains": [ 00:24:17.819 { 00:24:17.819 "dma_device_id": "system", 00:24:17.819 "dma_device_type": 1 00:24:17.819 }, 00:24:17.819 { 00:24:17.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.819 "dma_device_type": 2 00:24:17.819 } 00:24:17.819 ], 00:24:17.819 "driver_specific": {} 00:24:17.819 } 00:24:17.819 ] 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.077 "name": "Existed_Raid", 00:24:18.077 "uuid": "71716dc8-81ed-4f3e-9186-45ee2175e077", 00:24:18.077 "strip_size_kb": 64, 00:24:18.077 "state": "configuring", 00:24:18.077 "raid_level": "raid0", 00:24:18.077 "superblock": true, 00:24:18.077 "num_base_bdevs": 2, 00:24:18.077 "num_base_bdevs_discovered": 1, 00:24:18.077 "num_base_bdevs_operational": 2, 00:24:18.077 "base_bdevs_list": [ 00:24:18.077 { 00:24:18.077 "name": "BaseBdev1", 00:24:18.077 "uuid": "8fb48b27-5679-4a46-8b6b-d377caa039be", 00:24:18.077 "is_configured": true, 00:24:18.077 "data_offset": 2048, 00:24:18.077 "data_size": 63488 00:24:18.077 }, 00:24:18.077 { 00:24:18.077 "name": "BaseBdev2", 00:24:18.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.077 "is_configured": false, 00:24:18.077 "data_offset": 0, 00:24:18.077 "data_size": 0 00:24:18.077 } 00:24:18.077 ] 00:24:18.077 }' 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.077 11:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.011 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:19.011 [2024-05-15 11:17:37.521550] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:19.011 [2024-05-15 11:17:37.521602] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:24:19.011 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:19.269 [2024-05-15 11:17:37.769640] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:19.269 [2024-05-15 11:17:37.771228] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:19.269 [2024-05-15 11:17:37.771287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:19.269 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:24:19.269 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:19.269 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:19.269 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:19.269 11:17:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:19.269 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:19.269 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:19.269 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:19.270 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.270 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.270 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.270 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.270 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.270 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.528 11:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.528 "name": "Existed_Raid", 00:24:19.528 "uuid": "8881c88e-925f-4ea7-8af3-9200082381c8", 00:24:19.528 "strip_size_kb": 64, 00:24:19.528 "state": "configuring", 00:24:19.528 "raid_level": "raid0", 00:24:19.528 "superblock": true, 00:24:19.528 "num_base_bdevs": 2, 00:24:19.528 "num_base_bdevs_discovered": 1, 00:24:19.528 "num_base_bdevs_operational": 2, 00:24:19.528 "base_bdevs_list": [ 00:24:19.528 { 00:24:19.528 "name": "BaseBdev1", 00:24:19.528 "uuid": "8fb48b27-5679-4a46-8b6b-d377caa039be", 00:24:19.528 "is_configured": true, 00:24:19.528 "data_offset": 2048, 00:24:19.528 "data_size": 63488 00:24:19.528 }, 00:24:19.528 { 00:24:19.528 "name": "BaseBdev2", 00:24:19.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.528 "is_configured": false, 00:24:19.528 "data_offset": 0, 00:24:19.528 "data_size": 0 00:24:19.528 } 00:24:19.528 ] 00:24:19.528 }' 00:24:19.528 11:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.528 11:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.097 11:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:20.355 [2024-05-15 11:17:38.815567] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:20.355 [2024-05-15 11:17:38.815746] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:24:20.355 [2024-05-15 11:17:38.815767] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:20.355 BaseBdev2 00:24:20.355 [2024-05-15 11:17:38.817516] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:24:20.355 [2024-05-15 11:17:38.817757] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:24:20.355 [2024-05-15 11:17:38.817773] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:24:20.355 [2024-05-15 11:17:38.817934] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.355 11:17:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:24:20.355 11:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:20.355 11:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:20.355 11:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:20.355 11:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:20.355 11:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:20.355 11:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:20.614 [ 00:24:20.614 { 00:24:20.614 "name": "BaseBdev2", 00:24:20.614 "aliases": [ 00:24:20.614 "ff040e05-1a01-44f0-8602-39633e535b46" 00:24:20.614 ], 00:24:20.614 "product_name": "Malloc disk", 00:24:20.614 "block_size": 512, 00:24:20.614 "num_blocks": 65536, 00:24:20.614 "uuid": "ff040e05-1a01-44f0-8602-39633e535b46", 00:24:20.614 "assigned_rate_limits": { 00:24:20.614 "rw_ios_per_sec": 0, 00:24:20.614 "rw_mbytes_per_sec": 0, 00:24:20.614 "r_mbytes_per_sec": 0, 00:24:20.614 "w_mbytes_per_sec": 0 00:24:20.614 }, 00:24:20.614 "claimed": true, 00:24:20.614 "claim_type": "exclusive_write", 00:24:20.614 "zoned": false, 00:24:20.614 "supported_io_types": { 00:24:20.614 "read": true, 00:24:20.614 "write": true, 00:24:20.614 "unmap": true, 00:24:20.614 "write_zeroes": true, 00:24:20.614 "flush": true, 00:24:20.614 "reset": true, 00:24:20.614 "compare": false, 00:24:20.614 "compare_and_write": false, 00:24:20.614 "abort": true, 00:24:20.614 "nvme_admin": false, 00:24:20.614 "nvme_io": false 00:24:20.614 }, 00:24:20.614 "memory_domains": [ 00:24:20.614 { 00:24:20.614 "dma_device_id": "system", 00:24:20.614 "dma_device_type": 1 00:24:20.614 }, 00:24:20.614 { 00:24:20.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.614 "dma_device_type": 2 00:24:20.614 } 00:24:20.614 ], 00:24:20.614 "driver_specific": {} 00:24:20.614 } 00:24:20.614 ] 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.614 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.873 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:20.873 "name": "Existed_Raid", 00:24:20.873 "uuid": "8881c88e-925f-4ea7-8af3-9200082381c8", 00:24:20.873 "strip_size_kb": 64, 00:24:20.873 "state": "online", 00:24:20.873 "raid_level": "raid0", 00:24:20.873 "superblock": true, 00:24:20.873 "num_base_bdevs": 2, 00:24:20.873 "num_base_bdevs_discovered": 2, 00:24:20.873 "num_base_bdevs_operational": 2, 00:24:20.873 "base_bdevs_list": [ 00:24:20.873 { 00:24:20.873 "name": "BaseBdev1", 00:24:20.873 "uuid": "8fb48b27-5679-4a46-8b6b-d377caa039be", 00:24:20.873 "is_configured": true, 00:24:20.873 "data_offset": 2048, 00:24:20.873 "data_size": 63488 00:24:20.873 }, 00:24:20.873 { 00:24:20.873 "name": "BaseBdev2", 00:24:20.873 "uuid": "ff040e05-1a01-44f0-8602-39633e535b46", 00:24:20.873 "is_configured": true, 00:24:20.873 "data_offset": 2048, 00:24:20.873 "data_size": 63488 00:24:20.873 } 00:24:20.873 ] 00:24:20.873 }' 00:24:20.873 11:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:20.873 11:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.805 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:24:21.805 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:24:21.805 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:21.805 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:21.805 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:21.805 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:24:21.805 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:21.805 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:21.805 [2024-05-15 11:17:40.384081] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:21.805 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:21.805 "name": "Existed_Raid", 00:24:21.805 "aliases": [ 00:24:21.805 "8881c88e-925f-4ea7-8af3-9200082381c8" 00:24:21.805 ], 00:24:21.805 "product_name": "Raid Volume", 00:24:21.805 "block_size": 512, 00:24:21.805 "num_blocks": 126976, 00:24:21.805 "uuid": "8881c88e-925f-4ea7-8af3-9200082381c8", 00:24:21.805 "assigned_rate_limits": { 00:24:21.805 "rw_ios_per_sec": 0, 00:24:21.805 "rw_mbytes_per_sec": 0, 00:24:21.805 "r_mbytes_per_sec": 0, 00:24:21.805 "w_mbytes_per_sec": 0 
00:24:21.805 }, 00:24:21.805 "claimed": false, 00:24:21.805 "zoned": false, 00:24:21.805 "supported_io_types": { 00:24:21.805 "read": true, 00:24:21.805 "write": true, 00:24:21.805 "unmap": true, 00:24:21.805 "write_zeroes": true, 00:24:21.805 "flush": true, 00:24:21.805 "reset": true, 00:24:21.805 "compare": false, 00:24:21.805 "compare_and_write": false, 00:24:21.805 "abort": false, 00:24:21.805 "nvme_admin": false, 00:24:21.805 "nvme_io": false 00:24:21.805 }, 00:24:21.805 "memory_domains": [ 00:24:21.805 { 00:24:21.805 "dma_device_id": "system", 00:24:21.805 "dma_device_type": 1 00:24:21.805 }, 00:24:21.805 { 00:24:21.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.806 "dma_device_type": 2 00:24:21.806 }, 00:24:21.806 { 00:24:21.806 "dma_device_id": "system", 00:24:21.806 "dma_device_type": 1 00:24:21.806 }, 00:24:21.806 { 00:24:21.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.806 "dma_device_type": 2 00:24:21.806 } 00:24:21.806 ], 00:24:21.806 "driver_specific": { 00:24:21.806 "raid": { 00:24:21.806 "uuid": "8881c88e-925f-4ea7-8af3-9200082381c8", 00:24:21.806 "strip_size_kb": 64, 00:24:21.806 "state": "online", 00:24:21.806 "raid_level": "raid0", 00:24:21.806 "superblock": true, 00:24:21.806 "num_base_bdevs": 2, 00:24:21.806 "num_base_bdevs_discovered": 2, 00:24:21.806 "num_base_bdevs_operational": 2, 00:24:21.806 "base_bdevs_list": [ 00:24:21.806 { 00:24:21.806 "name": "BaseBdev1", 00:24:21.806 "uuid": "8fb48b27-5679-4a46-8b6b-d377caa039be", 00:24:21.806 "is_configured": true, 00:24:21.806 "data_offset": 2048, 00:24:21.806 "data_size": 63488 00:24:21.806 }, 00:24:21.806 { 00:24:21.806 "name": "BaseBdev2", 00:24:21.806 "uuid": "ff040e05-1a01-44f0-8602-39633e535b46", 00:24:21.806 "is_configured": true, 00:24:21.806 "data_offset": 2048, 00:24:21.806 "data_size": 63488 00:24:21.806 } 00:24:21.806 ] 00:24:21.806 } 00:24:21.806 } 00:24:21.806 }' 00:24:21.806 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:22.064 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:24:22.064 BaseBdev2' 00:24:22.064 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:22.064 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:22.064 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:22.322 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:22.322 "name": "BaseBdev1", 00:24:22.322 "aliases": [ 00:24:22.322 "8fb48b27-5679-4a46-8b6b-d377caa039be" 00:24:22.322 ], 00:24:22.322 "product_name": "Malloc disk", 00:24:22.322 "block_size": 512, 00:24:22.322 "num_blocks": 65536, 00:24:22.322 "uuid": "8fb48b27-5679-4a46-8b6b-d377caa039be", 00:24:22.322 "assigned_rate_limits": { 00:24:22.322 "rw_ios_per_sec": 0, 00:24:22.322 "rw_mbytes_per_sec": 0, 00:24:22.322 "r_mbytes_per_sec": 0, 00:24:22.322 "w_mbytes_per_sec": 0 00:24:22.322 }, 00:24:22.322 "claimed": true, 00:24:22.322 "claim_type": "exclusive_write", 00:24:22.322 "zoned": false, 00:24:22.322 "supported_io_types": { 00:24:22.322 "read": true, 00:24:22.322 "write": true, 00:24:22.322 "unmap": true, 00:24:22.322 "write_zeroes": true, 00:24:22.322 "flush": true, 00:24:22.322 "reset": true, 00:24:22.322 
"compare": false, 00:24:22.322 "compare_and_write": false, 00:24:22.322 "abort": true, 00:24:22.322 "nvme_admin": false, 00:24:22.322 "nvme_io": false 00:24:22.322 }, 00:24:22.322 "memory_domains": [ 00:24:22.322 { 00:24:22.322 "dma_device_id": "system", 00:24:22.322 "dma_device_type": 1 00:24:22.322 }, 00:24:22.322 { 00:24:22.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.322 "dma_device_type": 2 00:24:22.322 } 00:24:22.322 ], 00:24:22.322 "driver_specific": {} 00:24:22.322 }' 00:24:22.322 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:22.322 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:22.322 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:22.322 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:22.322 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:22.579 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:22.579 11:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:22.579 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:22.579 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:22.579 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:22.579 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:22.837 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:22.837 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:22.837 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:22.837 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:22.837 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:22.837 "name": "BaseBdev2", 00:24:22.837 "aliases": [ 00:24:22.837 "ff040e05-1a01-44f0-8602-39633e535b46" 00:24:22.837 ], 00:24:22.837 "product_name": "Malloc disk", 00:24:22.837 "block_size": 512, 00:24:22.837 "num_blocks": 65536, 00:24:22.837 "uuid": "ff040e05-1a01-44f0-8602-39633e535b46", 00:24:22.837 "assigned_rate_limits": { 00:24:22.837 "rw_ios_per_sec": 0, 00:24:22.837 "rw_mbytes_per_sec": 0, 00:24:22.837 "r_mbytes_per_sec": 0, 00:24:22.837 "w_mbytes_per_sec": 0 00:24:22.837 }, 00:24:22.837 "claimed": true, 00:24:22.837 "claim_type": "exclusive_write", 00:24:22.837 "zoned": false, 00:24:22.837 "supported_io_types": { 00:24:22.837 "read": true, 00:24:22.837 "write": true, 00:24:22.837 "unmap": true, 00:24:22.837 "write_zeroes": true, 00:24:22.837 "flush": true, 00:24:22.837 "reset": true, 00:24:22.837 "compare": false, 00:24:22.837 "compare_and_write": false, 00:24:22.837 "abort": true, 00:24:22.837 "nvme_admin": false, 00:24:22.837 "nvme_io": false 00:24:22.837 }, 00:24:22.837 "memory_domains": [ 00:24:22.837 { 00:24:22.837 "dma_device_id": "system", 00:24:22.837 "dma_device_type": 1 00:24:22.837 }, 00:24:22.837 { 00:24:22.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.837 "dma_device_type": 2 00:24:22.837 } 00:24:22.837 ], 00:24:22.837 
"driver_specific": {} 00:24:22.837 }' 00:24:22.837 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:23.095 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:23.095 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:23.095 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:23.095 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:23.095 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:23.095 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:23.095 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:23.353 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:23.353 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:23.353 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:23.353 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:23.353 11:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:23.611 [2024-05-15 11:17:42.076284] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.611 [2024-05-15 11:17:42.076320] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:23.611 [2024-05-15 11:17:42.076374] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- 
# local tmp 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.611 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.869 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:23.869 "name": "Existed_Raid", 00:24:23.869 "uuid": "8881c88e-925f-4ea7-8af3-9200082381c8", 00:24:23.869 "strip_size_kb": 64, 00:24:23.869 "state": "offline", 00:24:23.869 "raid_level": "raid0", 00:24:23.869 "superblock": true, 00:24:23.869 "num_base_bdevs": 2, 00:24:23.869 "num_base_bdevs_discovered": 1, 00:24:23.869 "num_base_bdevs_operational": 1, 00:24:23.869 "base_bdevs_list": [ 00:24:23.869 { 00:24:23.869 "name": null, 00:24:23.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.869 "is_configured": false, 00:24:23.869 "data_offset": 2048, 00:24:23.869 "data_size": 63488 00:24:23.869 }, 00:24:23.869 { 00:24:23.869 "name": "BaseBdev2", 00:24:23.869 "uuid": "ff040e05-1a01-44f0-8602-39633e535b46", 00:24:23.869 "is_configured": true, 00:24:23.869 "data_offset": 2048, 00:24:23.869 "data_size": 63488 00:24:23.869 } 00:24:23.869 ] 00:24:23.869 }' 00:24:23.869 11:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:23.869 11:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.803 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:24.803 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:24.803 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.803 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:24.803 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:24.803 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:24.803 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:25.061 [2024-05-15 11:17:43.602904] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:25.061 [2024-05-15 11:17:43.602974] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 
-gt 2 ']' 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 53510 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 53510 ']' 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 53510 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:25.320 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 53510 00:24:25.591 killing process with pid 53510 00:24:25.591 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:25.591 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:25.591 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53510' 00:24:25.591 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 53510 00:24:25.591 11:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 53510 00:24:25.591 [2024-05-15 11:17:43.970519] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:25.591 [2024-05-15 11:17:43.970661] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:26.991 ************************************ 00:24:26.991 END TEST raid_state_function_test_sb 00:24:26.991 ************************************ 00:24:26.991 11:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:24:26.991 00:24:26.991 real 0m11.991s 00:24:26.991 user 0m21.113s 00:24:26.991 sys 0m1.308s 00:24:26.991 11:17:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:26.991 11:17:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.991 11:17:45 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:24:26.991 11:17:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:26.991 11:17:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:26.991 11:17:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:26.991 ************************************ 00:24:26.991 START TEST raid_superblock_test 00:24:26.991 ************************************ 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:26.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=53899 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 53899 /var/tmp/spdk-raid.sock 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 53899 ']' 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:26.991 11:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.991 [2024-05-15 11:17:45.485901] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
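The raid_superblock_test starting here never runs against real hardware: it launches the bare bdev_svc application shown above and then drives every step through rpc.py against the UNIX-domain socket /var/tmp/spdk-raid.sock. A minimal sketch of that launch-and-wait pattern, using only the binary, socket path and rpc.py script that appear in this trace (the polling loop merely stands in for the test's own waitforlisten helper, and the absolute paths assume the same workspace layout as this job):

    # start the minimal bdev application with raid debug logging, as the test does
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # wait until the RPC socket answers; rpc_get_methods is a cheap query any SPDK app serves
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # from this point on, every configuration step recorded in the log is an rpc.py call on this socket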
00:24:26.991 [2024-05-15 11:17:45.486086] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53899 ] 00:24:27.250 [2024-05-15 11:17:45.640938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.250 [2024-05-15 11:17:45.855763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.509 [2024-05-15 11:17:46.064841] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:27.768 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:28.027 malloc1 00:24:28.027 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:28.285 [2024-05-15 11:17:46.733045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:28.285 [2024-05-15 11:17:46.733198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:28.285 [2024-05-15 11:17:46.733270] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:24:28.285 [2024-05-15 11:17:46.733311] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:28.285 [2024-05-15 11:17:46.735228] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:28.285 [2024-05-15 11:17:46.735269] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:28.285 pt1 00:24:28.285 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:28.285 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:28.285 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:28.285 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:28.285 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:28.285 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:24:28.285 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:28.285 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:28.285 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:28.543 malloc2 00:24:28.543 11:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:28.543 [2024-05-15 11:17:47.148562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:28.543 [2024-05-15 11:17:47.148662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:28.543 [2024-05-15 11:17:47.148726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:24:28.543 [2024-05-15 11:17:47.148774] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:28.543 [2024-05-15 11:17:47.150771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:28.543 [2024-05-15 11:17:47.150850] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:28.543 pt2 00:24:28.543 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:28.543 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:28.543 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:24:28.802 [2024-05-15 11:17:47.340684] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:28.802 [2024-05-15 11:17:47.342450] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:28.802 [2024-05-15 11:17:47.342594] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:24:28.802 [2024-05-15 11:17:47.342610] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:28.802 [2024-05-15 11:17:47.342746] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:24:28.802 [2024-05-15 11:17:47.343054] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:24:28.802 [2024-05-15 11:17:47.343073] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:24:28.802 [2024-05-15 11:17:47.343202] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.802 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.060 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:29.060 "name": "raid_bdev1", 00:24:29.060 "uuid": "f52cb19d-f34a-4669-a08d-eeacfe885a7d", 00:24:29.060 "strip_size_kb": 64, 00:24:29.060 "state": "online", 00:24:29.060 "raid_level": "raid0", 00:24:29.060 "superblock": true, 00:24:29.060 "num_base_bdevs": 2, 00:24:29.060 "num_base_bdevs_discovered": 2, 00:24:29.060 "num_base_bdevs_operational": 2, 00:24:29.060 "base_bdevs_list": [ 00:24:29.060 { 00:24:29.060 "name": "pt1", 00:24:29.060 "uuid": "d1159b46-20fb-522c-9918-925403750f47", 00:24:29.060 "is_configured": true, 00:24:29.060 "data_offset": 2048, 00:24:29.060 "data_size": 63488 00:24:29.060 }, 00:24:29.060 { 00:24:29.060 "name": "pt2", 00:24:29.060 "uuid": "be691fd0-3bbc-511a-ad37-8212b430feba", 00:24:29.060 "is_configured": true, 00:24:29.060 "data_offset": 2048, 00:24:29.060 "data_size": 63488 00:24:29.060 } 00:24:29.060 ] 00:24:29.060 }' 00:24:29.060 11:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:29.060 11:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.626 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:29.626 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:24:29.626 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:29.626 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:29.627 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:29.627 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:24:29.627 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:29.627 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:29.885 [2024-05-15 11:17:48.492975] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:29.885 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:29.885 "name": "raid_bdev1", 00:24:29.885 "aliases": [ 00:24:29.885 "f52cb19d-f34a-4669-a08d-eeacfe885a7d" 00:24:29.885 ], 00:24:29.885 "product_name": "Raid Volume", 00:24:29.885 "block_size": 512, 00:24:29.885 "num_blocks": 126976, 00:24:29.885 "uuid": "f52cb19d-f34a-4669-a08d-eeacfe885a7d", 00:24:29.885 "assigned_rate_limits": { 00:24:29.885 "rw_ios_per_sec": 0, 00:24:29.885 "rw_mbytes_per_sec": 0, 00:24:29.885 "r_mbytes_per_sec": 0, 00:24:29.885 "w_mbytes_per_sec": 0 00:24:29.885 }, 
00:24:29.885 "claimed": false, 00:24:29.885 "zoned": false, 00:24:29.885 "supported_io_types": { 00:24:29.885 "read": true, 00:24:29.885 "write": true, 00:24:29.885 "unmap": true, 00:24:29.885 "write_zeroes": true, 00:24:29.885 "flush": true, 00:24:29.885 "reset": true, 00:24:29.885 "compare": false, 00:24:29.885 "compare_and_write": false, 00:24:29.885 "abort": false, 00:24:29.885 "nvme_admin": false, 00:24:29.885 "nvme_io": false 00:24:29.885 }, 00:24:29.885 "memory_domains": [ 00:24:29.885 { 00:24:29.885 "dma_device_id": "system", 00:24:29.885 "dma_device_type": 1 00:24:29.885 }, 00:24:29.885 { 00:24:29.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.885 "dma_device_type": 2 00:24:29.885 }, 00:24:29.885 { 00:24:29.885 "dma_device_id": "system", 00:24:29.885 "dma_device_type": 1 00:24:29.885 }, 00:24:29.885 { 00:24:29.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.885 "dma_device_type": 2 00:24:29.885 } 00:24:29.885 ], 00:24:29.885 "driver_specific": { 00:24:29.885 "raid": { 00:24:29.885 "uuid": "f52cb19d-f34a-4669-a08d-eeacfe885a7d", 00:24:29.885 "strip_size_kb": 64, 00:24:29.885 "state": "online", 00:24:29.885 "raid_level": "raid0", 00:24:29.885 "superblock": true, 00:24:29.885 "num_base_bdevs": 2, 00:24:29.885 "num_base_bdevs_discovered": 2, 00:24:29.885 "num_base_bdevs_operational": 2, 00:24:29.885 "base_bdevs_list": [ 00:24:29.885 { 00:24:29.885 "name": "pt1", 00:24:29.885 "uuid": "d1159b46-20fb-522c-9918-925403750f47", 00:24:29.885 "is_configured": true, 00:24:29.885 "data_offset": 2048, 00:24:29.886 "data_size": 63488 00:24:29.886 }, 00:24:29.886 { 00:24:29.886 "name": "pt2", 00:24:29.886 "uuid": "be691fd0-3bbc-511a-ad37-8212b430feba", 00:24:29.886 "is_configured": true, 00:24:29.886 "data_offset": 2048, 00:24:29.886 "data_size": 63488 00:24:29.886 } 00:24:29.886 ] 00:24:29.886 } 00:24:29.886 } 00:24:29.886 }' 00:24:29.886 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:30.144 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:24:30.144 pt2' 00:24:30.144 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:30.144 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:30.144 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:30.402 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:30.402 "name": "pt1", 00:24:30.402 "aliases": [ 00:24:30.402 "d1159b46-20fb-522c-9918-925403750f47" 00:24:30.402 ], 00:24:30.402 "product_name": "passthru", 00:24:30.402 "block_size": 512, 00:24:30.402 "num_blocks": 65536, 00:24:30.402 "uuid": "d1159b46-20fb-522c-9918-925403750f47", 00:24:30.402 "assigned_rate_limits": { 00:24:30.402 "rw_ios_per_sec": 0, 00:24:30.402 "rw_mbytes_per_sec": 0, 00:24:30.402 "r_mbytes_per_sec": 0, 00:24:30.402 "w_mbytes_per_sec": 0 00:24:30.402 }, 00:24:30.402 "claimed": true, 00:24:30.402 "claim_type": "exclusive_write", 00:24:30.402 "zoned": false, 00:24:30.402 "supported_io_types": { 00:24:30.402 "read": true, 00:24:30.402 "write": true, 00:24:30.402 "unmap": true, 00:24:30.402 "write_zeroes": true, 00:24:30.402 "flush": true, 00:24:30.402 "reset": true, 00:24:30.402 "compare": false, 00:24:30.402 "compare_and_write": false, 00:24:30.402 "abort": true, 00:24:30.402 
"nvme_admin": false, 00:24:30.402 "nvme_io": false 00:24:30.402 }, 00:24:30.402 "memory_domains": [ 00:24:30.402 { 00:24:30.402 "dma_device_id": "system", 00:24:30.402 "dma_device_type": 1 00:24:30.402 }, 00:24:30.402 { 00:24:30.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.402 "dma_device_type": 2 00:24:30.402 } 00:24:30.402 ], 00:24:30.402 "driver_specific": { 00:24:30.402 "passthru": { 00:24:30.402 "name": "pt1", 00:24:30.402 "base_bdev_name": "malloc1" 00:24:30.402 } 00:24:30.402 } 00:24:30.402 }' 00:24:30.402 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:30.402 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:30.402 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:30.402 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:30.402 11:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:30.402 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.402 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:30.660 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:30.660 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:30.660 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:30.660 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:30.660 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:30.660 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:30.660 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:30.660 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:30.917 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:30.917 "name": "pt2", 00:24:30.917 "aliases": [ 00:24:30.917 "be691fd0-3bbc-511a-ad37-8212b430feba" 00:24:30.917 ], 00:24:30.917 "product_name": "passthru", 00:24:30.917 "block_size": 512, 00:24:30.917 "num_blocks": 65536, 00:24:30.917 "uuid": "be691fd0-3bbc-511a-ad37-8212b430feba", 00:24:30.917 "assigned_rate_limits": { 00:24:30.918 "rw_ios_per_sec": 0, 00:24:30.918 "rw_mbytes_per_sec": 0, 00:24:30.918 "r_mbytes_per_sec": 0, 00:24:30.918 "w_mbytes_per_sec": 0 00:24:30.918 }, 00:24:30.918 "claimed": true, 00:24:30.918 "claim_type": "exclusive_write", 00:24:30.918 "zoned": false, 00:24:30.918 "supported_io_types": { 00:24:30.918 "read": true, 00:24:30.918 "write": true, 00:24:30.918 "unmap": true, 00:24:30.918 "write_zeroes": true, 00:24:30.918 "flush": true, 00:24:30.918 "reset": true, 00:24:30.918 "compare": false, 00:24:30.918 "compare_and_write": false, 00:24:30.918 "abort": true, 00:24:30.918 "nvme_admin": false, 00:24:30.918 "nvme_io": false 00:24:30.918 }, 00:24:30.918 "memory_domains": [ 00:24:30.918 { 00:24:30.918 "dma_device_id": "system", 00:24:30.918 "dma_device_type": 1 00:24:30.918 }, 00:24:30.918 { 00:24:30.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.918 "dma_device_type": 2 00:24:30.918 } 00:24:30.918 ], 00:24:30.918 "driver_specific": { 00:24:30.918 "passthru": { 00:24:30.918 "name": "pt2", 00:24:30.918 
"base_bdev_name": "malloc2" 00:24:30.918 } 00:24:30.918 } 00:24:30.918 }' 00:24:30.918 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:30.918 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:31.177 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:31.177 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:31.177 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:31.177 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.177 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:31.177 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:31.435 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.435 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:31.435 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:31.435 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:31.435 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:31.435 11:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:31.694 [2024-05-15 11:17:50.217160] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.694 11:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f52cb19d-f34a-4669-a08d-eeacfe885a7d 00:24:31.694 11:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f52cb19d-f34a-4669-a08d-eeacfe885a7d ']' 00:24:31.694 11:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:32.020 [2024-05-15 11:17:50.493076] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:32.020 [2024-05-15 11:17:50.493121] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:32.020 [2024-05-15 11:17:50.493198] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.020 [2024-05-15 11:17:50.493239] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:32.020 [2024-05-15 11:17:50.493251] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:24:32.020 11:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.020 11:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:32.278 11:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:32.278 11:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:32.278 11:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:32.278 11:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:24:32.538 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:32.538 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:32.797 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:32.797 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:33.055 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:24:33.315 [2024-05-15 11:17:51.710224] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:33.315 [2024-05-15 11:17:51.711943] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:33.315 [2024-05-15 11:17:51.712006] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:33.315 [2024-05-15 11:17:51.712075] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:33.315 [2024-05-15 11:17:51.712114] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.315 [2024-05-15 11:17:51.712127] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:24:33.315 request: 00:24:33.315 { 00:24:33.315 "name": "raid_bdev1", 00:24:33.315 "raid_level": "raid0", 00:24:33.315 
"base_bdevs": [ 00:24:33.315 "malloc1", 00:24:33.315 "malloc2" 00:24:33.315 ], 00:24:33.315 "strip_size_kb": 64, 00:24:33.315 "superblock": false, 00:24:33.315 "method": "bdev_raid_create", 00:24:33.315 "req_id": 1 00:24:33.315 } 00:24:33.315 Got JSON-RPC error response 00:24:33.315 response: 00:24:33.315 { 00:24:33.315 "code": -17, 00:24:33.315 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:33.315 } 00:24:33.315 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:33.315 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:33.315 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:33.315 11:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:33.315 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:33.315 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.576 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:33.576 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:33.576 11:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:33.834 [2024-05-15 11:17:52.262219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:33.834 [2024-05-15 11:17:52.262345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.834 [2024-05-15 11:17:52.262393] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:24:33.834 [2024-05-15 11:17:52.262423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.834 [2024-05-15 11:17:52.264298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.834 [2024-05-15 11:17:52.264354] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:33.834 [2024-05-15 11:17:52.264445] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:33.834 [2024-05-15 11:17:52.264510] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:33.834 pt1 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.834 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.092 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:34.092 "name": "raid_bdev1", 00:24:34.092 "uuid": "f52cb19d-f34a-4669-a08d-eeacfe885a7d", 00:24:34.092 "strip_size_kb": 64, 00:24:34.092 "state": "configuring", 00:24:34.092 "raid_level": "raid0", 00:24:34.092 "superblock": true, 00:24:34.092 "num_base_bdevs": 2, 00:24:34.092 "num_base_bdevs_discovered": 1, 00:24:34.092 "num_base_bdevs_operational": 2, 00:24:34.092 "base_bdevs_list": [ 00:24:34.092 { 00:24:34.092 "name": "pt1", 00:24:34.092 "uuid": "d1159b46-20fb-522c-9918-925403750f47", 00:24:34.092 "is_configured": true, 00:24:34.092 "data_offset": 2048, 00:24:34.092 "data_size": 63488 00:24:34.092 }, 00:24:34.092 { 00:24:34.092 "name": null, 00:24:34.092 "uuid": "be691fd0-3bbc-511a-ad37-8212b430feba", 00:24:34.092 "is_configured": false, 00:24:34.092 "data_offset": 2048, 00:24:34.092 "data_size": 63488 00:24:34.092 } 00:24:34.092 ] 00:24:34.092 }' 00:24:34.092 11:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:34.092 11:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.659 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:34.659 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:34.659 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:34.659 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:34.918 [2024-05-15 11:17:53.378380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:34.918 [2024-05-15 11:17:53.378514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.918 [2024-05-15 11:17:53.378570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:24:34.918 [2024-05-15 11:17:53.378600] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.918 [2024-05-15 11:17:53.379225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.918 [2024-05-15 11:17:53.379270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:34.918 [2024-05-15 11:17:53.379357] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:34.918 [2024-05-15 11:17:53.379384] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:34.918 [2024-05-15 11:17:53.379472] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:24:34.918 [2024-05-15 11:17:53.379485] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:34.918 [2024-05-15 11:17:53.379569] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:34.918 [2024-05-15 11:17:53.379789] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000011880 00:24:34.918 [2024-05-15 11:17:53.379804] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:24:34.918 [2024-05-15 11:17:53.379918] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.918 pt2 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.918 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.177 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.177 "name": "raid_bdev1", 00:24:35.177 "uuid": "f52cb19d-f34a-4669-a08d-eeacfe885a7d", 00:24:35.177 "strip_size_kb": 64, 00:24:35.177 "state": "online", 00:24:35.177 "raid_level": "raid0", 00:24:35.177 "superblock": true, 00:24:35.177 "num_base_bdevs": 2, 00:24:35.177 "num_base_bdevs_discovered": 2, 00:24:35.177 "num_base_bdevs_operational": 2, 00:24:35.177 "base_bdevs_list": [ 00:24:35.177 { 00:24:35.177 "name": "pt1", 00:24:35.177 "uuid": "d1159b46-20fb-522c-9918-925403750f47", 00:24:35.177 "is_configured": true, 00:24:35.177 "data_offset": 2048, 00:24:35.177 "data_size": 63488 00:24:35.177 }, 00:24:35.177 { 00:24:35.177 "name": "pt2", 00:24:35.177 "uuid": "be691fd0-3bbc-511a-ad37-8212b430feba", 00:24:35.177 "is_configured": true, 00:24:35.177 "data_offset": 2048, 00:24:35.177 "data_size": 63488 00:24:35.177 } 00:24:35.177 ] 00:24:35.177 }' 00:24:35.177 11:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.177 11:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.752 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:35.752 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:24:35.752 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:35.752 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:35.752 11:17:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:35.752 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:24:35.752 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:35.752 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:36.010 [2024-05-15 11:17:54.490639] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:36.010 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:36.010 "name": "raid_bdev1", 00:24:36.010 "aliases": [ 00:24:36.010 "f52cb19d-f34a-4669-a08d-eeacfe885a7d" 00:24:36.010 ], 00:24:36.010 "product_name": "Raid Volume", 00:24:36.010 "block_size": 512, 00:24:36.010 "num_blocks": 126976, 00:24:36.010 "uuid": "f52cb19d-f34a-4669-a08d-eeacfe885a7d", 00:24:36.010 "assigned_rate_limits": { 00:24:36.010 "rw_ios_per_sec": 0, 00:24:36.010 "rw_mbytes_per_sec": 0, 00:24:36.010 "r_mbytes_per_sec": 0, 00:24:36.010 "w_mbytes_per_sec": 0 00:24:36.010 }, 00:24:36.010 "claimed": false, 00:24:36.010 "zoned": false, 00:24:36.010 "supported_io_types": { 00:24:36.010 "read": true, 00:24:36.010 "write": true, 00:24:36.010 "unmap": true, 00:24:36.010 "write_zeroes": true, 00:24:36.010 "flush": true, 00:24:36.010 "reset": true, 00:24:36.010 "compare": false, 00:24:36.010 "compare_and_write": false, 00:24:36.010 "abort": false, 00:24:36.010 "nvme_admin": false, 00:24:36.010 "nvme_io": false 00:24:36.010 }, 00:24:36.010 "memory_domains": [ 00:24:36.010 { 00:24:36.010 "dma_device_id": "system", 00:24:36.010 "dma_device_type": 1 00:24:36.010 }, 00:24:36.010 { 00:24:36.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.010 "dma_device_type": 2 00:24:36.010 }, 00:24:36.010 { 00:24:36.010 "dma_device_id": "system", 00:24:36.010 "dma_device_type": 1 00:24:36.010 }, 00:24:36.010 { 00:24:36.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.010 "dma_device_type": 2 00:24:36.010 } 00:24:36.010 ], 00:24:36.010 "driver_specific": { 00:24:36.010 "raid": { 00:24:36.010 "uuid": "f52cb19d-f34a-4669-a08d-eeacfe885a7d", 00:24:36.010 "strip_size_kb": 64, 00:24:36.010 "state": "online", 00:24:36.010 "raid_level": "raid0", 00:24:36.011 "superblock": true, 00:24:36.011 "num_base_bdevs": 2, 00:24:36.011 "num_base_bdevs_discovered": 2, 00:24:36.011 "num_base_bdevs_operational": 2, 00:24:36.011 "base_bdevs_list": [ 00:24:36.011 { 00:24:36.011 "name": "pt1", 00:24:36.011 "uuid": "d1159b46-20fb-522c-9918-925403750f47", 00:24:36.011 "is_configured": true, 00:24:36.011 "data_offset": 2048, 00:24:36.011 "data_size": 63488 00:24:36.011 }, 00:24:36.011 { 00:24:36.011 "name": "pt2", 00:24:36.011 "uuid": "be691fd0-3bbc-511a-ad37-8212b430feba", 00:24:36.011 "is_configured": true, 00:24:36.011 "data_offset": 2048, 00:24:36.011 "data_size": 63488 00:24:36.011 } 00:24:36.011 ] 00:24:36.011 } 00:24:36.011 } 00:24:36.011 }' 00:24:36.011 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:36.011 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:24:36.011 pt2' 00:24:36.011 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:36.011 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:36.011 11:17:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:36.269 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:36.269 "name": "pt1", 00:24:36.269 "aliases": [ 00:24:36.269 "d1159b46-20fb-522c-9918-925403750f47" 00:24:36.269 ], 00:24:36.269 "product_name": "passthru", 00:24:36.269 "block_size": 512, 00:24:36.269 "num_blocks": 65536, 00:24:36.269 "uuid": "d1159b46-20fb-522c-9918-925403750f47", 00:24:36.269 "assigned_rate_limits": { 00:24:36.269 "rw_ios_per_sec": 0, 00:24:36.269 "rw_mbytes_per_sec": 0, 00:24:36.269 "r_mbytes_per_sec": 0, 00:24:36.269 "w_mbytes_per_sec": 0 00:24:36.269 }, 00:24:36.269 "claimed": true, 00:24:36.269 "claim_type": "exclusive_write", 00:24:36.269 "zoned": false, 00:24:36.269 "supported_io_types": { 00:24:36.269 "read": true, 00:24:36.269 "write": true, 00:24:36.269 "unmap": true, 00:24:36.269 "write_zeroes": true, 00:24:36.269 "flush": true, 00:24:36.269 "reset": true, 00:24:36.269 "compare": false, 00:24:36.269 "compare_and_write": false, 00:24:36.269 "abort": true, 00:24:36.269 "nvme_admin": false, 00:24:36.269 "nvme_io": false 00:24:36.270 }, 00:24:36.270 "memory_domains": [ 00:24:36.270 { 00:24:36.270 "dma_device_id": "system", 00:24:36.270 "dma_device_type": 1 00:24:36.270 }, 00:24:36.270 { 00:24:36.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.270 "dma_device_type": 2 00:24:36.270 } 00:24:36.270 ], 00:24:36.270 "driver_specific": { 00:24:36.270 "passthru": { 00:24:36.270 "name": "pt1", 00:24:36.270 "base_bdev_name": "malloc1" 00:24:36.270 } 00:24:36.270 } 00:24:36.270 }' 00:24:36.270 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:36.270 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:36.270 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:36.270 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:36.528 11:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:36.528 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:36.528 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:36.528 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:36.528 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:36.528 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:36.528 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:36.786 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:36.786 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:36.786 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:36.786 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:37.045 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:37.045 "name": "pt2", 00:24:37.045 "aliases": [ 00:24:37.045 "be691fd0-3bbc-511a-ad37-8212b430feba" 00:24:37.045 ], 00:24:37.045 "product_name": "passthru", 00:24:37.045 "block_size": 512, 
00:24:37.045 "num_blocks": 65536, 00:24:37.045 "uuid": "be691fd0-3bbc-511a-ad37-8212b430feba", 00:24:37.045 "assigned_rate_limits": { 00:24:37.045 "rw_ios_per_sec": 0, 00:24:37.045 "rw_mbytes_per_sec": 0, 00:24:37.045 "r_mbytes_per_sec": 0, 00:24:37.045 "w_mbytes_per_sec": 0 00:24:37.045 }, 00:24:37.045 "claimed": true, 00:24:37.045 "claim_type": "exclusive_write", 00:24:37.045 "zoned": false, 00:24:37.045 "supported_io_types": { 00:24:37.045 "read": true, 00:24:37.045 "write": true, 00:24:37.045 "unmap": true, 00:24:37.045 "write_zeroes": true, 00:24:37.045 "flush": true, 00:24:37.045 "reset": true, 00:24:37.045 "compare": false, 00:24:37.045 "compare_and_write": false, 00:24:37.045 "abort": true, 00:24:37.045 "nvme_admin": false, 00:24:37.045 "nvme_io": false 00:24:37.045 }, 00:24:37.045 "memory_domains": [ 00:24:37.045 { 00:24:37.045 "dma_device_id": "system", 00:24:37.045 "dma_device_type": 1 00:24:37.045 }, 00:24:37.045 { 00:24:37.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.045 "dma_device_type": 2 00:24:37.045 } 00:24:37.045 ], 00:24:37.045 "driver_specific": { 00:24:37.045 "passthru": { 00:24:37.045 "name": "pt2", 00:24:37.045 "base_bdev_name": "malloc2" 00:24:37.045 } 00:24:37.045 } 00:24:37.045 }' 00:24:37.045 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:37.045 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:37.045 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:37.045 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:37.045 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:37.045 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:37.045 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:37.303 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:37.303 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:37.303 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:37.303 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:37.303 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:37.303 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:37.303 11:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:37.562 [2024-05-15 11:17:56.015092] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f52cb19d-f34a-4669-a08d-eeacfe885a7d '!=' f52cb19d-f34a-4669-a08d-eeacfe885a7d ']' 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 53899 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 53899 ']' 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@950 -- # kill -0 53899 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 53899 00:24:37.562 killing process with pid 53899 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53899' 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 53899 00:24:37.562 11:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 53899 00:24:37.562 [2024-05-15 11:17:56.053203] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:37.562 [2024-05-15 11:17:56.053273] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:37.562 [2024-05-15 11:17:56.053310] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:37.562 [2024-05-15 11:17:56.053321] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:24:37.820 [2024-05-15 11:17:56.226220] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:39.198 11:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:24:39.198 00:24:39.198 real 0m12.212s 00:24:39.198 user 0m21.612s 00:24:39.198 sys 0m1.305s 00:24:39.198 11:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:39.198 11:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.198 ************************************ 00:24:39.198 END TEST raid_superblock_test 00:24:39.198 ************************************ 00:24:39.198 11:17:57 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:24:39.198 11:17:57 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:24:39.198 11:17:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:39.198 11:17:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:39.198 11:17:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:39.198 ************************************ 00:24:39.198 START TEST raid_state_function_test 00:24:39.198 ************************************ 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:39.198 11:17:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:24:39.198 Process raid pid: 54274 00:24:39.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=54274 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 54274' 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 54274 /var/tmp/spdk-raid.sock 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 54274 ']' 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:39.198 11:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.198 [2024-05-15 11:17:57.760107] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:24:39.198 [2024-05-15 11:17:57.760405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.457 [2024-05-15 11:17:57.930084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.716 [2024-05-15 11:17:58.165180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.974 [2024-05-15 11:17:58.412903] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:39.974 11:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:39.974 11:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:24:39.974 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:40.233 [2024-05-15 11:17:58.813195] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:40.233 [2024-05-15 11:17:58.813292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:40.233 [2024-05-15 11:17:58.813312] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:40.233 [2024-05-15 11:17:58.813336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:40.233 11:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.491 11:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:40.491 "name": "Existed_Raid", 00:24:40.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.491 "strip_size_kb": 64, 00:24:40.491 "state": "configuring", 00:24:40.491 "raid_level": "concat", 00:24:40.491 "superblock": false, 00:24:40.491 "num_base_bdevs": 2, 00:24:40.491 "num_base_bdevs_discovered": 0, 00:24:40.491 "num_base_bdevs_operational": 2, 00:24:40.491 "base_bdevs_list": [ 
00:24:40.491 { 00:24:40.492 "name": "BaseBdev1", 00:24:40.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.492 "is_configured": false, 00:24:40.492 "data_offset": 0, 00:24:40.492 "data_size": 0 00:24:40.492 }, 00:24:40.492 { 00:24:40.492 "name": "BaseBdev2", 00:24:40.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.492 "is_configured": false, 00:24:40.492 "data_offset": 0, 00:24:40.492 "data_size": 0 00:24:40.492 } 00:24:40.492 ] 00:24:40.492 }' 00:24:40.492 11:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:40.492 11:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.426 11:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:41.426 [2024-05-15 11:17:59.929265] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:41.426 [2024-05-15 11:17:59.929318] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:24:41.426 11:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:41.684 [2024-05-15 11:18:00.125254] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:41.684 [2024-05-15 11:18:00.125360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:41.684 [2024-05-15 11:18:00.125376] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:41.684 [2024-05-15 11:18:00.125403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:41.684 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:41.943 BaseBdev1 00:24:41.943 [2024-05-15 11:18:00.394080] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:41.943 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:24:41.943 11:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:41.943 11:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:41.943 11:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:41.943 11:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:41.943 11:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:41.943 11:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:42.201 [ 00:24:42.201 { 00:24:42.201 "name": "BaseBdev1", 00:24:42.201 "aliases": [ 00:24:42.201 "01b494a9-2454-4e9d-8fda-d6ff0e014d14" 00:24:42.201 ], 00:24:42.201 "product_name": "Malloc disk", 00:24:42.201 "block_size": 512, 00:24:42.201 
"num_blocks": 65536, 00:24:42.201 "uuid": "01b494a9-2454-4e9d-8fda-d6ff0e014d14", 00:24:42.201 "assigned_rate_limits": { 00:24:42.201 "rw_ios_per_sec": 0, 00:24:42.201 "rw_mbytes_per_sec": 0, 00:24:42.201 "r_mbytes_per_sec": 0, 00:24:42.201 "w_mbytes_per_sec": 0 00:24:42.201 }, 00:24:42.201 "claimed": true, 00:24:42.201 "claim_type": "exclusive_write", 00:24:42.201 "zoned": false, 00:24:42.201 "supported_io_types": { 00:24:42.201 "read": true, 00:24:42.201 "write": true, 00:24:42.201 "unmap": true, 00:24:42.201 "write_zeroes": true, 00:24:42.201 "flush": true, 00:24:42.201 "reset": true, 00:24:42.201 "compare": false, 00:24:42.201 "compare_and_write": false, 00:24:42.201 "abort": true, 00:24:42.201 "nvme_admin": false, 00:24:42.201 "nvme_io": false 00:24:42.201 }, 00:24:42.201 "memory_domains": [ 00:24:42.201 { 00:24:42.201 "dma_device_id": "system", 00:24:42.201 "dma_device_type": 1 00:24:42.201 }, 00:24:42.201 { 00:24:42.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.201 "dma_device_type": 2 00:24:42.201 } 00:24:42.201 ], 00:24:42.201 "driver_specific": {} 00:24:42.201 } 00:24:42.201 ] 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:42.201 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.202 11:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.460 11:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:42.460 "name": "Existed_Raid", 00:24:42.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.460 "strip_size_kb": 64, 00:24:42.460 "state": "configuring", 00:24:42.460 "raid_level": "concat", 00:24:42.460 "superblock": false, 00:24:42.460 "num_base_bdevs": 2, 00:24:42.460 "num_base_bdevs_discovered": 1, 00:24:42.460 "num_base_bdevs_operational": 2, 00:24:42.460 "base_bdevs_list": [ 00:24:42.460 { 00:24:42.460 "name": "BaseBdev1", 00:24:42.460 "uuid": "01b494a9-2454-4e9d-8fda-d6ff0e014d14", 00:24:42.460 "is_configured": true, 00:24:42.460 "data_offset": 0, 00:24:42.460 "data_size": 65536 00:24:42.460 }, 00:24:42.460 { 00:24:42.460 "name": "BaseBdev2", 00:24:42.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.460 
"is_configured": false, 00:24:42.460 "data_offset": 0, 00:24:42.460 "data_size": 0 00:24:42.460 } 00:24:42.460 ] 00:24:42.460 }' 00:24:42.460 11:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:42.460 11:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.026 11:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:43.283 [2024-05-15 11:18:01.846302] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:43.283 [2024-05-15 11:18:01.846415] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:24:43.284 11:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:43.541 [2024-05-15 11:18:02.086390] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.541 [2024-05-15 11:18:02.088530] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:43.541 [2024-05-15 11:18:02.088630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.541 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.799 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:43.799 "name": "Existed_Raid", 00:24:43.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.799 "strip_size_kb": 64, 00:24:43.799 "state": "configuring", 00:24:43.799 "raid_level": "concat", 00:24:43.799 "superblock": false, 00:24:43.799 "num_base_bdevs": 2, 00:24:43.799 "num_base_bdevs_discovered": 1, 00:24:43.799 
"num_base_bdevs_operational": 2, 00:24:43.799 "base_bdevs_list": [ 00:24:43.799 { 00:24:43.799 "name": "BaseBdev1", 00:24:43.799 "uuid": "01b494a9-2454-4e9d-8fda-d6ff0e014d14", 00:24:43.799 "is_configured": true, 00:24:43.799 "data_offset": 0, 00:24:43.799 "data_size": 65536 00:24:43.799 }, 00:24:43.799 { 00:24:43.799 "name": "BaseBdev2", 00:24:43.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.799 "is_configured": false, 00:24:43.799 "data_offset": 0, 00:24:43.799 "data_size": 0 00:24:43.799 } 00:24:43.799 ] 00:24:43.799 }' 00:24:43.799 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:43.799 11:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.365 11:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:44.624 [2024-05-15 11:18:03.249430] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:44.624 [2024-05-15 11:18:03.249483] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:24:44.624 [2024-05-15 11:18:03.249495] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:44.624 [2024-05-15 11:18:03.249628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:24:44.624 BaseBdev2 00:24:44.624 [2024-05-15 11:18:03.250144] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:24:44.624 [2024-05-15 11:18:03.250167] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:24:44.624 [2024-05-15 11:18:03.250388] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.881 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:24:44.881 11:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:44.881 11:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:44.881 11:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:44.881 11:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:44.881 11:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:44.881 11:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:44.881 11:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:45.138 [ 00:24:45.138 { 00:24:45.138 "name": "BaseBdev2", 00:24:45.138 "aliases": [ 00:24:45.138 "d414bb45-02c7-42be-9e5d-6d93c4143061" 00:24:45.138 ], 00:24:45.138 "product_name": "Malloc disk", 00:24:45.138 "block_size": 512, 00:24:45.138 "num_blocks": 65536, 00:24:45.138 "uuid": "d414bb45-02c7-42be-9e5d-6d93c4143061", 00:24:45.138 "assigned_rate_limits": { 00:24:45.138 "rw_ios_per_sec": 0, 00:24:45.138 "rw_mbytes_per_sec": 0, 00:24:45.138 "r_mbytes_per_sec": 0, 00:24:45.138 "w_mbytes_per_sec": 0 00:24:45.138 }, 00:24:45.138 "claimed": true, 00:24:45.138 "claim_type": "exclusive_write", 00:24:45.138 "zoned": 
false, 00:24:45.138 "supported_io_types": { 00:24:45.138 "read": true, 00:24:45.138 "write": true, 00:24:45.138 "unmap": true, 00:24:45.138 "write_zeroes": true, 00:24:45.138 "flush": true, 00:24:45.138 "reset": true, 00:24:45.138 "compare": false, 00:24:45.138 "compare_and_write": false, 00:24:45.138 "abort": true, 00:24:45.138 "nvme_admin": false, 00:24:45.138 "nvme_io": false 00:24:45.138 }, 00:24:45.138 "memory_domains": [ 00:24:45.138 { 00:24:45.138 "dma_device_id": "system", 00:24:45.138 "dma_device_type": 1 00:24:45.138 }, 00:24:45.138 { 00:24:45.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.138 "dma_device_type": 2 00:24:45.138 } 00:24:45.138 ], 00:24:45.138 "driver_specific": {} 00:24:45.138 } 00:24:45.138 ] 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.138 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.396 11:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.396 "name": "Existed_Raid", 00:24:45.396 "uuid": "899afcfd-65c6-4df5-997f-2adcb42a8b2b", 00:24:45.396 "strip_size_kb": 64, 00:24:45.396 "state": "online", 00:24:45.396 "raid_level": "concat", 00:24:45.396 "superblock": false, 00:24:45.396 "num_base_bdevs": 2, 00:24:45.396 "num_base_bdevs_discovered": 2, 00:24:45.396 "num_base_bdevs_operational": 2, 00:24:45.396 "base_bdevs_list": [ 00:24:45.396 { 00:24:45.396 "name": "BaseBdev1", 00:24:45.396 "uuid": "01b494a9-2454-4e9d-8fda-d6ff0e014d14", 00:24:45.396 "is_configured": true, 00:24:45.396 "data_offset": 0, 00:24:45.396 "data_size": 65536 00:24:45.396 }, 00:24:45.396 { 00:24:45.396 "name": "BaseBdev2", 00:24:45.396 "uuid": "d414bb45-02c7-42be-9e5d-6d93c4143061", 00:24:45.396 "is_configured": true, 00:24:45.397 "data_offset": 0, 00:24:45.397 "data_size": 65536 00:24:45.397 } 00:24:45.397 ] 00:24:45.397 }' 00:24:45.397 11:18:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.397 11:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.335 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:24:46.335 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:24:46.335 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:46.335 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:46.335 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:46.335 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:24:46.335 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:46.335 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:46.335 [2024-05-15 11:18:04.857931] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:46.335 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:46.335 "name": "Existed_Raid", 00:24:46.335 "aliases": [ 00:24:46.335 "899afcfd-65c6-4df5-997f-2adcb42a8b2b" 00:24:46.335 ], 00:24:46.335 "product_name": "Raid Volume", 00:24:46.335 "block_size": 512, 00:24:46.335 "num_blocks": 131072, 00:24:46.335 "uuid": "899afcfd-65c6-4df5-997f-2adcb42a8b2b", 00:24:46.335 "assigned_rate_limits": { 00:24:46.335 "rw_ios_per_sec": 0, 00:24:46.335 "rw_mbytes_per_sec": 0, 00:24:46.335 "r_mbytes_per_sec": 0, 00:24:46.335 "w_mbytes_per_sec": 0 00:24:46.335 }, 00:24:46.335 "claimed": false, 00:24:46.335 "zoned": false, 00:24:46.335 "supported_io_types": { 00:24:46.335 "read": true, 00:24:46.335 "write": true, 00:24:46.335 "unmap": true, 00:24:46.335 "write_zeroes": true, 00:24:46.335 "flush": true, 00:24:46.335 "reset": true, 00:24:46.335 "compare": false, 00:24:46.335 "compare_and_write": false, 00:24:46.335 "abort": false, 00:24:46.335 "nvme_admin": false, 00:24:46.335 "nvme_io": false 00:24:46.335 }, 00:24:46.335 "memory_domains": [ 00:24:46.335 { 00:24:46.335 "dma_device_id": "system", 00:24:46.335 "dma_device_type": 1 00:24:46.335 }, 00:24:46.335 { 00:24:46.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.335 "dma_device_type": 2 00:24:46.335 }, 00:24:46.335 { 00:24:46.335 "dma_device_id": "system", 00:24:46.335 "dma_device_type": 1 00:24:46.335 }, 00:24:46.335 { 00:24:46.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.335 "dma_device_type": 2 00:24:46.335 } 00:24:46.335 ], 00:24:46.335 "driver_specific": { 00:24:46.335 "raid": { 00:24:46.335 "uuid": "899afcfd-65c6-4df5-997f-2adcb42a8b2b", 00:24:46.335 "strip_size_kb": 64, 00:24:46.335 "state": "online", 00:24:46.335 "raid_level": "concat", 00:24:46.335 "superblock": false, 00:24:46.335 "num_base_bdevs": 2, 00:24:46.335 "num_base_bdevs_discovered": 2, 00:24:46.335 "num_base_bdevs_operational": 2, 00:24:46.335 "base_bdevs_list": [ 00:24:46.335 { 00:24:46.335 "name": "BaseBdev1", 00:24:46.335 "uuid": "01b494a9-2454-4e9d-8fda-d6ff0e014d14", 00:24:46.335 "is_configured": true, 00:24:46.335 "data_offset": 0, 00:24:46.335 "data_size": 65536 00:24:46.335 }, 00:24:46.335 { 00:24:46.335 "name": "BaseBdev2", 00:24:46.336 "uuid": "d414bb45-02c7-42be-9e5d-6d93c4143061", 00:24:46.336 "is_configured": 
true, 00:24:46.336 "data_offset": 0, 00:24:46.336 "data_size": 65536 00:24:46.336 } 00:24:46.336 ] 00:24:46.336 } 00:24:46.336 } 00:24:46.336 }' 00:24:46.336 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:46.336 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:24:46.336 BaseBdev2' 00:24:46.336 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:46.336 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:46.336 11:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:46.594 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:46.594 "name": "BaseBdev1", 00:24:46.594 "aliases": [ 00:24:46.594 "01b494a9-2454-4e9d-8fda-d6ff0e014d14" 00:24:46.594 ], 00:24:46.594 "product_name": "Malloc disk", 00:24:46.594 "block_size": 512, 00:24:46.594 "num_blocks": 65536, 00:24:46.594 "uuid": "01b494a9-2454-4e9d-8fda-d6ff0e014d14", 00:24:46.594 "assigned_rate_limits": { 00:24:46.594 "rw_ios_per_sec": 0, 00:24:46.594 "rw_mbytes_per_sec": 0, 00:24:46.594 "r_mbytes_per_sec": 0, 00:24:46.594 "w_mbytes_per_sec": 0 00:24:46.594 }, 00:24:46.594 "claimed": true, 00:24:46.594 "claim_type": "exclusive_write", 00:24:46.594 "zoned": false, 00:24:46.594 "supported_io_types": { 00:24:46.594 "read": true, 00:24:46.594 "write": true, 00:24:46.594 "unmap": true, 00:24:46.594 "write_zeroes": true, 00:24:46.594 "flush": true, 00:24:46.594 "reset": true, 00:24:46.594 "compare": false, 00:24:46.594 "compare_and_write": false, 00:24:46.594 "abort": true, 00:24:46.594 "nvme_admin": false, 00:24:46.594 "nvme_io": false 00:24:46.594 }, 00:24:46.594 "memory_domains": [ 00:24:46.594 { 00:24:46.594 "dma_device_id": "system", 00:24:46.594 "dma_device_type": 1 00:24:46.594 }, 00:24:46.594 { 00:24:46.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.594 "dma_device_type": 2 00:24:46.594 } 00:24:46.594 ], 00:24:46.594 "driver_specific": {} 00:24:46.594 }' 00:24:46.594 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:46.594 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:46.852 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:46.852 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:46.852 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:46.852 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:46.852 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:46.852 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:47.110 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:47.110 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:47.110 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:47.110 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:47.110 11:18:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:47.110 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:47.110 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:47.369 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:47.369 "name": "BaseBdev2", 00:24:47.369 "aliases": [ 00:24:47.369 "d414bb45-02c7-42be-9e5d-6d93c4143061" 00:24:47.369 ], 00:24:47.369 "product_name": "Malloc disk", 00:24:47.369 "block_size": 512, 00:24:47.369 "num_blocks": 65536, 00:24:47.369 "uuid": "d414bb45-02c7-42be-9e5d-6d93c4143061", 00:24:47.369 "assigned_rate_limits": { 00:24:47.369 "rw_ios_per_sec": 0, 00:24:47.369 "rw_mbytes_per_sec": 0, 00:24:47.369 "r_mbytes_per_sec": 0, 00:24:47.369 "w_mbytes_per_sec": 0 00:24:47.369 }, 00:24:47.369 "claimed": true, 00:24:47.369 "claim_type": "exclusive_write", 00:24:47.369 "zoned": false, 00:24:47.369 "supported_io_types": { 00:24:47.369 "read": true, 00:24:47.369 "write": true, 00:24:47.369 "unmap": true, 00:24:47.369 "write_zeroes": true, 00:24:47.369 "flush": true, 00:24:47.369 "reset": true, 00:24:47.369 "compare": false, 00:24:47.369 "compare_and_write": false, 00:24:47.369 "abort": true, 00:24:47.369 "nvme_admin": false, 00:24:47.369 "nvme_io": false 00:24:47.369 }, 00:24:47.369 "memory_domains": [ 00:24:47.369 { 00:24:47.369 "dma_device_id": "system", 00:24:47.369 "dma_device_type": 1 00:24:47.369 }, 00:24:47.369 { 00:24:47.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.369 "dma_device_type": 2 00:24:47.369 } 00:24:47.369 ], 00:24:47.369 "driver_specific": {} 00:24:47.369 }' 00:24:47.369 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:47.369 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:47.369 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:47.369 11:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:47.627 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:47.627 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:47.627 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:47.627 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:47.627 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:47.627 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:47.886 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:47.886 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:47.886 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:48.144 [2024-05-15 11:18:06.522124] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:48.144 [2024-05-15 11:18:06.522165] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:48.144 [2024-05-15 11:18:06.522227] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.144 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.402 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.402 "name": "Existed_Raid", 00:24:48.402 "uuid": "899afcfd-65c6-4df5-997f-2adcb42a8b2b", 00:24:48.402 "strip_size_kb": 64, 00:24:48.402 "state": "offline", 00:24:48.402 "raid_level": "concat", 00:24:48.402 "superblock": false, 00:24:48.402 "num_base_bdevs": 2, 00:24:48.402 "num_base_bdevs_discovered": 1, 00:24:48.402 "num_base_bdevs_operational": 1, 00:24:48.402 "base_bdevs_list": [ 00:24:48.402 { 00:24:48.402 "name": null, 00:24:48.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.402 "is_configured": false, 00:24:48.402 "data_offset": 0, 00:24:48.402 "data_size": 65536 00:24:48.402 }, 00:24:48.402 { 00:24:48.402 "name": "BaseBdev2", 00:24:48.402 "uuid": "d414bb45-02c7-42be-9e5d-6d93c4143061", 00:24:48.402 "is_configured": true, 00:24:48.402 "data_offset": 0, 00:24:48.402 "data_size": 65536 00:24:48.402 } 00:24:48.402 ] 00:24:48.402 }' 00:24:48.402 11:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.402 11:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.968 11:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:48.968 11:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:48.968 11:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:48.968 11:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:49.226 11:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:49.226 11:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:49.226 11:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:49.518 [2024-05-15 11:18:08.074866] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:49.518 [2024-05-15 11:18:08.074938] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 54274 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 54274 ']' 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 54274 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:49.799 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 54274 00:24:50.058 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:50.058 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:50.058 killing process with pid 54274 00:24:50.058 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 54274' 00:24:50.058 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 54274 00:24:50.058 11:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 54274 00:24:50.058 [2024-05-15 11:18:08.438143] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:50.058 [2024-05-15 11:18:08.438251] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:51.434 ************************************ 00:24:51.434 END TEST raid_state_function_test 00:24:51.434 ************************************ 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:24:51.435 00:24:51.435 real 0m12.077s 00:24:51.435 user 0m21.405s 00:24:51.435 sys 0m1.272s 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.435 11:18:09 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:24:51.435 11:18:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:51.435 11:18:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:51.435 11:18:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:51.435 ************************************ 00:24:51.435 START TEST raid_state_function_test_sb 00:24:51.435 ************************************ 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:51.435 Process raid pid: 54663 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@245 -- # raid_pid=54663 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 54663' 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 54663 /var/tmp/spdk-raid.sock 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 54663 ']' 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:51.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:51.435 11:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.435 [2024-05-15 11:18:09.902632] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:24:51.435 [2024-05-15 11:18:09.902853] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.693 [2024-05-15 11:18:10.072837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.693 [2024-05-15 11:18:10.293206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.951 [2024-05-15 11:18:10.499352] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:52.209 11:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:52.209 11:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:24:52.209 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:52.468 [2024-05-15 11:18:10.912212] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:52.468 [2024-05-15 11:18:10.912329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:52.468 [2024-05-15 11:18:10.912348] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:52.468 [2024-05-15 11:18:10.912370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.468 11:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.727 11:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:52.727 "name": "Existed_Raid", 00:24:52.727 "uuid": "00f46618-626a-46fe-9c41-430486fdc932", 00:24:52.727 "strip_size_kb": 64, 00:24:52.727 "state": "configuring", 00:24:52.727 "raid_level": "concat", 00:24:52.727 "superblock": true, 00:24:52.727 "num_base_bdevs": 2, 00:24:52.727 "num_base_bdevs_discovered": 0, 00:24:52.727 "num_base_bdevs_operational": 2, 00:24:52.727 "base_bdevs_list": [ 00:24:52.727 { 00:24:52.727 "name": "BaseBdev1", 00:24:52.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.727 "is_configured": false, 00:24:52.727 "data_offset": 0, 00:24:52.727 "data_size": 0 00:24:52.727 }, 00:24:52.727 { 00:24:52.727 "name": "BaseBdev2", 00:24:52.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.727 "is_configured": false, 00:24:52.727 "data_offset": 0, 00:24:52.727 "data_size": 0 00:24:52.727 } 00:24:52.727 ] 00:24:52.727 }' 00:24:52.727 11:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:52.727 11:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.293 11:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:53.551 [2024-05-15 11:18:11.980116] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:53.551 [2024-05-15 11:18:11.980189] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:24:53.551 11:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:53.810 [2024-05-15 11:18:12.228162] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:53.810 [2024-05-15 11:18:12.228269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:53.810 [2024-05-15 11:18:12.228286] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:53.810 [2024-05-15 11:18:12.228314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:53.810 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:54.068 [2024-05-15 11:18:12.496740] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.068 BaseBdev1 00:24:54.068 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:24:54.068 11:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:54.068 11:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:54.068 11:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:54.068 11:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:54.068 11:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:54.068 11:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:54.327 [ 00:24:54.327 { 00:24:54.327 "name": "BaseBdev1", 00:24:54.327 "aliases": [ 00:24:54.327 "a1291099-60f3-4332-866f-fc91e940c479" 00:24:54.327 ], 00:24:54.327 "product_name": "Malloc disk", 00:24:54.327 "block_size": 512, 00:24:54.327 "num_blocks": 65536, 00:24:54.327 "uuid": "a1291099-60f3-4332-866f-fc91e940c479", 00:24:54.327 "assigned_rate_limits": { 00:24:54.327 "rw_ios_per_sec": 0, 00:24:54.327 "rw_mbytes_per_sec": 0, 00:24:54.327 "r_mbytes_per_sec": 0, 00:24:54.327 "w_mbytes_per_sec": 0 00:24:54.327 }, 00:24:54.327 "claimed": true, 00:24:54.327 "claim_type": "exclusive_write", 00:24:54.327 "zoned": false, 00:24:54.327 "supported_io_types": { 00:24:54.327 "read": true, 00:24:54.327 "write": true, 00:24:54.327 "unmap": true, 00:24:54.327 "write_zeroes": true, 00:24:54.327 "flush": true, 00:24:54.327 "reset": true, 00:24:54.327 "compare": false, 00:24:54.327 "compare_and_write": false, 00:24:54.327 "abort": true, 00:24:54.327 "nvme_admin": false, 00:24:54.327 "nvme_io": false 00:24:54.327 }, 00:24:54.327 "memory_domains": [ 00:24:54.327 { 00:24:54.327 "dma_device_id": "system", 00:24:54.327 "dma_device_type": 1 00:24:54.327 }, 00:24:54.327 { 00:24:54.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.327 "dma_device_type": 2 00:24:54.327 } 00:24:54.327 ], 00:24:54.327 "driver_specific": {} 00:24:54.327 } 00:24:54.327 ] 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.327 11:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.585 11:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.585 "name": "Existed_Raid", 00:24:54.585 "uuid": "5208e10e-0ba8-4357-a923-c41b12bfc2b1", 00:24:54.585 "strip_size_kb": 64, 00:24:54.585 "state": "configuring", 00:24:54.585 "raid_level": "concat", 00:24:54.585 "superblock": true, 00:24:54.585 "num_base_bdevs": 2, 00:24:54.585 "num_base_bdevs_discovered": 1, 00:24:54.585 "num_base_bdevs_operational": 2, 00:24:54.585 "base_bdevs_list": [ 00:24:54.585 { 00:24:54.585 "name": "BaseBdev1", 00:24:54.585 "uuid": "a1291099-60f3-4332-866f-fc91e940c479", 00:24:54.585 "is_configured": true, 00:24:54.585 "data_offset": 2048, 00:24:54.585 "data_size": 63488 00:24:54.585 }, 00:24:54.585 { 00:24:54.585 "name": "BaseBdev2", 00:24:54.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.585 "is_configured": false, 00:24:54.585 "data_offset": 0, 00:24:54.585 "data_size": 0 00:24:54.585 } 00:24:54.585 ] 00:24:54.585 }' 00:24:54.585 11:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.585 11:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:55.519 11:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:55.519 [2024-05-15 11:18:14.057080] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:55.519 [2024-05-15 11:18:14.057150] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:24:55.520 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:55.778 [2024-05-15 11:18:14.265179] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:55.778 [2024-05-15 11:18:14.266614] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:55.778 [2024-05-15 11:18:14.266680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.778 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.036 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:56.036 "name": "Existed_Raid", 00:24:56.036 "uuid": "1266898f-1153-4901-b6c4-14e537a75b57", 00:24:56.036 "strip_size_kb": 64, 00:24:56.036 "state": "configuring", 00:24:56.036 "raid_level": "concat", 00:24:56.036 "superblock": true, 00:24:56.036 "num_base_bdevs": 2, 00:24:56.036 "num_base_bdevs_discovered": 1, 00:24:56.036 "num_base_bdevs_operational": 2, 00:24:56.036 "base_bdevs_list": [ 00:24:56.036 { 00:24:56.036 "name": "BaseBdev1", 00:24:56.036 "uuid": "a1291099-60f3-4332-866f-fc91e940c479", 00:24:56.036 "is_configured": true, 00:24:56.036 "data_offset": 2048, 00:24:56.036 "data_size": 63488 00:24:56.036 }, 00:24:56.036 { 00:24:56.036 "name": "BaseBdev2", 00:24:56.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.036 "is_configured": false, 00:24:56.036 "data_offset": 0, 00:24:56.036 "data_size": 0 00:24:56.036 } 00:24:56.036 ] 00:24:56.036 }' 00:24:56.036 11:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.036 11:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.603 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:56.861 [2024-05-15 11:18:15.420336] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:56.861 [2024-05-15 11:18:15.420513] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:24:56.861 [2024-05-15 11:18:15.420530] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:56.861 [2024-05-15 11:18:15.420648] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:24:56.861 BaseBdev2 00:24:56.861 [2024-05-15 11:18:15.420899] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:24:56.861 [2024-05-15 11:18:15.420923] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:24:56.861 [2024-05-15 11:18:15.421081] bdev_raid.c: 
315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.861 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:24:56.861 11:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:56.861 11:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:56.861 11:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:56.861 11:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:56.861 11:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:56.861 11:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:57.120 11:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:57.379 [ 00:24:57.379 { 00:24:57.379 "name": "BaseBdev2", 00:24:57.379 "aliases": [ 00:24:57.379 "c2b01b46-96b0-49bc-9a07-27685db37651" 00:24:57.379 ], 00:24:57.379 "product_name": "Malloc disk", 00:24:57.379 "block_size": 512, 00:24:57.379 "num_blocks": 65536, 00:24:57.379 "uuid": "c2b01b46-96b0-49bc-9a07-27685db37651", 00:24:57.379 "assigned_rate_limits": { 00:24:57.379 "rw_ios_per_sec": 0, 00:24:57.379 "rw_mbytes_per_sec": 0, 00:24:57.379 "r_mbytes_per_sec": 0, 00:24:57.379 "w_mbytes_per_sec": 0 00:24:57.379 }, 00:24:57.379 "claimed": true, 00:24:57.379 "claim_type": "exclusive_write", 00:24:57.379 "zoned": false, 00:24:57.379 "supported_io_types": { 00:24:57.379 "read": true, 00:24:57.379 "write": true, 00:24:57.379 "unmap": true, 00:24:57.379 "write_zeroes": true, 00:24:57.379 "flush": true, 00:24:57.379 "reset": true, 00:24:57.379 "compare": false, 00:24:57.379 "compare_and_write": false, 00:24:57.379 "abort": true, 00:24:57.379 "nvme_admin": false, 00:24:57.379 "nvme_io": false 00:24:57.379 }, 00:24:57.379 "memory_domains": [ 00:24:57.379 { 00:24:57.379 "dma_device_id": "system", 00:24:57.379 "dma_device_type": 1 00:24:57.379 }, 00:24:57.379 { 00:24:57.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.379 "dma_device_type": 2 00:24:57.379 } 00:24:57.379 ], 00:24:57.379 "driver_specific": {} 00:24:57.379 } 00:24:57.379 ] 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.379 11:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.638 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.638 "name": "Existed_Raid", 00:24:57.638 "uuid": "1266898f-1153-4901-b6c4-14e537a75b57", 00:24:57.638 "strip_size_kb": 64, 00:24:57.638 "state": "online", 00:24:57.638 "raid_level": "concat", 00:24:57.638 "superblock": true, 00:24:57.638 "num_base_bdevs": 2, 00:24:57.638 "num_base_bdevs_discovered": 2, 00:24:57.638 "num_base_bdevs_operational": 2, 00:24:57.638 "base_bdevs_list": [ 00:24:57.638 { 00:24:57.638 "name": "BaseBdev1", 00:24:57.638 "uuid": "a1291099-60f3-4332-866f-fc91e940c479", 00:24:57.638 "is_configured": true, 00:24:57.638 "data_offset": 2048, 00:24:57.638 "data_size": 63488 00:24:57.638 }, 00:24:57.638 { 00:24:57.638 "name": "BaseBdev2", 00:24:57.638 "uuid": "c2b01b46-96b0-49bc-9a07-27685db37651", 00:24:57.638 "is_configured": true, 00:24:57.638 "data_offset": 2048, 00:24:57.638 "data_size": 63488 00:24:57.638 } 00:24:57.638 ] 00:24:57.638 }' 00:24:57.638 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.638 11:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:58.205 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:24:58.205 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:24:58.205 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:58.205 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:58.205 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:58.205 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:24:58.205 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:58.205 11:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:58.464 [2024-05-15 11:18:17.048860] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:58.464 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:58.464 "name": "Existed_Raid", 00:24:58.464 "aliases": [ 00:24:58.464 "1266898f-1153-4901-b6c4-14e537a75b57" 00:24:58.464 ], 00:24:58.464 "product_name": "Raid Volume", 00:24:58.464 "block_size": 512, 00:24:58.464 "num_blocks": 126976, 00:24:58.464 "uuid": "1266898f-1153-4901-b6c4-14e537a75b57", 00:24:58.464 "assigned_rate_limits": { 00:24:58.464 
"rw_ios_per_sec": 0, 00:24:58.464 "rw_mbytes_per_sec": 0, 00:24:58.464 "r_mbytes_per_sec": 0, 00:24:58.464 "w_mbytes_per_sec": 0 00:24:58.464 }, 00:24:58.464 "claimed": false, 00:24:58.464 "zoned": false, 00:24:58.464 "supported_io_types": { 00:24:58.464 "read": true, 00:24:58.464 "write": true, 00:24:58.464 "unmap": true, 00:24:58.464 "write_zeroes": true, 00:24:58.464 "flush": true, 00:24:58.464 "reset": true, 00:24:58.464 "compare": false, 00:24:58.464 "compare_and_write": false, 00:24:58.464 "abort": false, 00:24:58.464 "nvme_admin": false, 00:24:58.464 "nvme_io": false 00:24:58.464 }, 00:24:58.464 "memory_domains": [ 00:24:58.464 { 00:24:58.464 "dma_device_id": "system", 00:24:58.464 "dma_device_type": 1 00:24:58.464 }, 00:24:58.464 { 00:24:58.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.464 "dma_device_type": 2 00:24:58.464 }, 00:24:58.464 { 00:24:58.464 "dma_device_id": "system", 00:24:58.464 "dma_device_type": 1 00:24:58.464 }, 00:24:58.464 { 00:24:58.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.464 "dma_device_type": 2 00:24:58.464 } 00:24:58.464 ], 00:24:58.464 "driver_specific": { 00:24:58.464 "raid": { 00:24:58.464 "uuid": "1266898f-1153-4901-b6c4-14e537a75b57", 00:24:58.464 "strip_size_kb": 64, 00:24:58.464 "state": "online", 00:24:58.464 "raid_level": "concat", 00:24:58.464 "superblock": true, 00:24:58.464 "num_base_bdevs": 2, 00:24:58.464 "num_base_bdevs_discovered": 2, 00:24:58.464 "num_base_bdevs_operational": 2, 00:24:58.464 "base_bdevs_list": [ 00:24:58.464 { 00:24:58.464 "name": "BaseBdev1", 00:24:58.464 "uuid": "a1291099-60f3-4332-866f-fc91e940c479", 00:24:58.464 "is_configured": true, 00:24:58.464 "data_offset": 2048, 00:24:58.464 "data_size": 63488 00:24:58.464 }, 00:24:58.464 { 00:24:58.464 "name": "BaseBdev2", 00:24:58.464 "uuid": "c2b01b46-96b0-49bc-9a07-27685db37651", 00:24:58.464 "is_configured": true, 00:24:58.464 "data_offset": 2048, 00:24:58.464 "data_size": 63488 00:24:58.464 } 00:24:58.464 ] 00:24:58.464 } 00:24:58.464 } 00:24:58.464 }' 00:24:58.464 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:58.723 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:24:58.723 BaseBdev2' 00:24:58.723 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:58.723 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:58.723 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:58.981 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:58.981 "name": "BaseBdev1", 00:24:58.981 "aliases": [ 00:24:58.981 "a1291099-60f3-4332-866f-fc91e940c479" 00:24:58.981 ], 00:24:58.981 "product_name": "Malloc disk", 00:24:58.981 "block_size": 512, 00:24:58.981 "num_blocks": 65536, 00:24:58.981 "uuid": "a1291099-60f3-4332-866f-fc91e940c479", 00:24:58.981 "assigned_rate_limits": { 00:24:58.981 "rw_ios_per_sec": 0, 00:24:58.981 "rw_mbytes_per_sec": 0, 00:24:58.981 "r_mbytes_per_sec": 0, 00:24:58.981 "w_mbytes_per_sec": 0 00:24:58.981 }, 00:24:58.981 "claimed": true, 00:24:58.981 "claim_type": "exclusive_write", 00:24:58.981 "zoned": false, 00:24:58.981 "supported_io_types": { 00:24:58.981 "read": true, 00:24:58.981 "write": true, 
00:24:58.981 "unmap": true, 00:24:58.981 "write_zeroes": true, 00:24:58.981 "flush": true, 00:24:58.981 "reset": true, 00:24:58.981 "compare": false, 00:24:58.981 "compare_and_write": false, 00:24:58.981 "abort": true, 00:24:58.981 "nvme_admin": false, 00:24:58.981 "nvme_io": false 00:24:58.981 }, 00:24:58.981 "memory_domains": [ 00:24:58.981 { 00:24:58.981 "dma_device_id": "system", 00:24:58.981 "dma_device_type": 1 00:24:58.981 }, 00:24:58.981 { 00:24:58.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.982 "dma_device_type": 2 00:24:58.982 } 00:24:58.982 ], 00:24:58.982 "driver_specific": {} 00:24:58.982 }' 00:24:58.982 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:58.982 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:58.982 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:58.982 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:58.982 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:58.982 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:58.982 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:59.240 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:59.240 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:59.240 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:59.240 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:59.240 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:59.240 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:59.240 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:59.240 11:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:59.499 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:59.499 "name": "BaseBdev2", 00:24:59.499 "aliases": [ 00:24:59.499 "c2b01b46-96b0-49bc-9a07-27685db37651" 00:24:59.499 ], 00:24:59.499 "product_name": "Malloc disk", 00:24:59.499 "block_size": 512, 00:24:59.499 "num_blocks": 65536, 00:24:59.499 "uuid": "c2b01b46-96b0-49bc-9a07-27685db37651", 00:24:59.499 "assigned_rate_limits": { 00:24:59.499 "rw_ios_per_sec": 0, 00:24:59.499 "rw_mbytes_per_sec": 0, 00:24:59.499 "r_mbytes_per_sec": 0, 00:24:59.499 "w_mbytes_per_sec": 0 00:24:59.499 }, 00:24:59.499 "claimed": true, 00:24:59.499 "claim_type": "exclusive_write", 00:24:59.499 "zoned": false, 00:24:59.499 "supported_io_types": { 00:24:59.499 "read": true, 00:24:59.499 "write": true, 00:24:59.499 "unmap": true, 00:24:59.499 "write_zeroes": true, 00:24:59.499 "flush": true, 00:24:59.499 "reset": true, 00:24:59.499 "compare": false, 00:24:59.499 "compare_and_write": false, 00:24:59.499 "abort": true, 00:24:59.499 "nvme_admin": false, 00:24:59.499 "nvme_io": false 00:24:59.499 }, 00:24:59.499 "memory_domains": [ 00:24:59.499 { 00:24:59.499 "dma_device_id": "system", 00:24:59.499 "dma_device_type": 1 00:24:59.499 }, 00:24:59.499 { 
00:24:59.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.499 "dma_device_type": 2 00:24:59.499 } 00:24:59.499 ], 00:24:59.499 "driver_specific": {} 00:24:59.499 }' 00:24:59.499 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:59.757 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:59.757 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:59.757 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:59.757 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:59.757 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:59.757 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:00.014 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:00.014 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:00.014 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:00.014 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:00.014 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:00.014 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:00.272 [2024-05-15 11:18:18.821042] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:00.272 [2024-05-15 11:18:18.821086] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.272 [2024-05-15 11:18:18.821144] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:00.530 11:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.789 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:00.789 "name": "Existed_Raid", 00:25:00.789 "uuid": "1266898f-1153-4901-b6c4-14e537a75b57", 00:25:00.789 "strip_size_kb": 64, 00:25:00.789 "state": "offline", 00:25:00.789 "raid_level": "concat", 00:25:00.789 "superblock": true, 00:25:00.789 "num_base_bdevs": 2, 00:25:00.789 "num_base_bdevs_discovered": 1, 00:25:00.789 "num_base_bdevs_operational": 1, 00:25:00.789 "base_bdevs_list": [ 00:25:00.789 { 00:25:00.789 "name": null, 00:25:00.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.789 "is_configured": false, 00:25:00.789 "data_offset": 2048, 00:25:00.789 "data_size": 63488 00:25:00.789 }, 00:25:00.789 { 00:25:00.789 "name": "BaseBdev2", 00:25:00.789 "uuid": "c2b01b46-96b0-49bc-9a07-27685db37651", 00:25:00.789 "is_configured": true, 00:25:00.789 "data_offset": 2048, 00:25:00.789 "data_size": 63488 00:25:00.789 } 00:25:00.789 ] 00:25:00.789 }' 00:25:00.789 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:00.789 11:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:01.354 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:01.354 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:01.354 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.354 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:01.620 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:01.620 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:01.620 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:01.883 [2024-05-15 11:18:20.260334] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:01.883 [2024-05-15 11:18:20.260410] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:25:01.883 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:01.883 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:01.883 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.883 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:25:02.141 11:18:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 54663 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 54663 ']' 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 54663 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 54663 00:25:02.141 killing process with pid 54663 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 54663' 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 54663 00:25:02.141 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 54663 00:25:02.141 [2024-05-15 11:18:20.588315] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:02.141 [2024-05-15 11:18:20.588425] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:03.514 ************************************ 00:25:03.514 END TEST raid_state_function_test_sb 00:25:03.514 ************************************ 00:25:03.514 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:25:03.514 00:25:03.514 real 0m12.086s 00:25:03.514 user 0m21.358s 00:25:03.514 sys 0m1.355s 00:25:03.514 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:03.514 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.514 11:18:21 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:25:03.514 11:18:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:03.514 11:18:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:03.514 11:18:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:03.514 ************************************ 00:25:03.514 START TEST raid_superblock_test 00:25:03.514 ************************************ 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=55048 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 55048 /var/tmp/spdk-raid.sock 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 55048 ']' 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:03.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:03.514 11:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.514 [2024-05-15 11:18:22.008454] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:25:03.514 [2024-05-15 11:18:22.008638] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55048 ] 00:25:03.773 [2024-05-15 11:18:22.168907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.773 [2024-05-15 11:18:22.383417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.031 [2024-05-15 11:18:22.579648] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:04.289 11:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:04.547 malloc1 00:25:04.547 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:04.805 [2024-05-15 11:18:23.309930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:04.805 [2024-05-15 11:18:23.310037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.805 [2024-05-15 11:18:23.310097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:25:04.805 [2024-05-15 11:18:23.310145] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.805 [2024-05-15 11:18:23.312281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.805 [2024-05-15 11:18:23.312331] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:04.805 pt1 00:25:04.805 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:04.805 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:04.805 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:04.805 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:04.805 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:04.805 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:25:04.805 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:04.805 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:04.805 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:05.063 malloc2 00:25:05.063 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:05.321 [2024-05-15 11:18:23.732397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:05.321 [2024-05-15 11:18:23.732498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.321 [2024-05-15 11:18:23.732555] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:25:05.321 [2024-05-15 11:18:23.732608] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.321 [2024-05-15 11:18:23.734665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.321 [2024-05-15 11:18:23.734723] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:05.321 pt2 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:25:05.321 [2024-05-15 11:18:23.928560] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:05.321 [2024-05-15 11:18:23.930215] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:05.321 [2024-05-15 11:18:23.930366] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:25:05.321 [2024-05-15 11:18:23.930499] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:05.321 [2024-05-15 11:18:23.930734] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:25:05.321 [2024-05-15 11:18:23.931155] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:25:05.321 [2024-05-15 11:18:23.931182] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:25:05.321 [2024-05-15 11:18:23.931423] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:05.321 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:05.322 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.322 11:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.579 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:05.579 "name": "raid_bdev1", 00:25:05.579 "uuid": "7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e", 00:25:05.579 "strip_size_kb": 64, 00:25:05.579 "state": "online", 00:25:05.579 "raid_level": "concat", 00:25:05.579 "superblock": true, 00:25:05.579 "num_base_bdevs": 2, 00:25:05.579 "num_base_bdevs_discovered": 2, 00:25:05.579 "num_base_bdevs_operational": 2, 00:25:05.579 "base_bdevs_list": [ 00:25:05.579 { 00:25:05.579 "name": "pt1", 00:25:05.579 "uuid": "8055d61a-e00e-5512-a7e6-8fd1f39bc371", 00:25:05.579 "is_configured": true, 00:25:05.579 "data_offset": 2048, 00:25:05.579 "data_size": 63488 00:25:05.579 }, 00:25:05.579 { 00:25:05.579 "name": "pt2", 00:25:05.579 "uuid": "c92a6986-535d-5326-8063-b94f42553ed5", 00:25:05.579 "is_configured": true, 00:25:05.579 "data_offset": 2048, 00:25:05.579 "data_size": 63488 00:25:05.579 } 00:25:05.579 ] 00:25:05.579 }' 00:25:05.579 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:05.579 11:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.146 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:06.146 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:25:06.146 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:06.146 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:06.146 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:06.147 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:25:06.147 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:06.147 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:06.405 [2024-05-15 11:18:24.920774] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:06.405 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:06.405 "name": "raid_bdev1", 00:25:06.405 "aliases": [ 00:25:06.405 "7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e" 00:25:06.405 ], 00:25:06.405 "product_name": "Raid Volume", 00:25:06.405 "block_size": 512, 00:25:06.405 "num_blocks": 126976, 00:25:06.405 "uuid": "7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e", 00:25:06.405 "assigned_rate_limits": { 00:25:06.405 "rw_ios_per_sec": 0, 00:25:06.405 "rw_mbytes_per_sec": 0, 00:25:06.405 "r_mbytes_per_sec": 0, 00:25:06.405 "w_mbytes_per_sec": 0 00:25:06.405 }, 
00:25:06.405 "claimed": false, 00:25:06.405 "zoned": false, 00:25:06.405 "supported_io_types": { 00:25:06.405 "read": true, 00:25:06.405 "write": true, 00:25:06.405 "unmap": true, 00:25:06.405 "write_zeroes": true, 00:25:06.405 "flush": true, 00:25:06.405 "reset": true, 00:25:06.405 "compare": false, 00:25:06.405 "compare_and_write": false, 00:25:06.405 "abort": false, 00:25:06.405 "nvme_admin": false, 00:25:06.405 "nvme_io": false 00:25:06.405 }, 00:25:06.405 "memory_domains": [ 00:25:06.405 { 00:25:06.405 "dma_device_id": "system", 00:25:06.405 "dma_device_type": 1 00:25:06.405 }, 00:25:06.405 { 00:25:06.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.405 "dma_device_type": 2 00:25:06.405 }, 00:25:06.405 { 00:25:06.405 "dma_device_id": "system", 00:25:06.405 "dma_device_type": 1 00:25:06.405 }, 00:25:06.405 { 00:25:06.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.405 "dma_device_type": 2 00:25:06.405 } 00:25:06.405 ], 00:25:06.405 "driver_specific": { 00:25:06.405 "raid": { 00:25:06.405 "uuid": "7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e", 00:25:06.405 "strip_size_kb": 64, 00:25:06.405 "state": "online", 00:25:06.405 "raid_level": "concat", 00:25:06.405 "superblock": true, 00:25:06.405 "num_base_bdevs": 2, 00:25:06.405 "num_base_bdevs_discovered": 2, 00:25:06.405 "num_base_bdevs_operational": 2, 00:25:06.405 "base_bdevs_list": [ 00:25:06.405 { 00:25:06.405 "name": "pt1", 00:25:06.405 "uuid": "8055d61a-e00e-5512-a7e6-8fd1f39bc371", 00:25:06.405 "is_configured": true, 00:25:06.405 "data_offset": 2048, 00:25:06.405 "data_size": 63488 00:25:06.405 }, 00:25:06.405 { 00:25:06.405 "name": "pt2", 00:25:06.405 "uuid": "c92a6986-535d-5326-8063-b94f42553ed5", 00:25:06.405 "is_configured": true, 00:25:06.405 "data_offset": 2048, 00:25:06.405 "data_size": 63488 00:25:06.405 } 00:25:06.405 ] 00:25:06.405 } 00:25:06.405 } 00:25:06.405 }' 00:25:06.405 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:06.405 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:25:06.405 pt2' 00:25:06.405 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:06.405 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:06.405 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:06.663 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:06.663 "name": "pt1", 00:25:06.663 "aliases": [ 00:25:06.663 "8055d61a-e00e-5512-a7e6-8fd1f39bc371" 00:25:06.664 ], 00:25:06.664 "product_name": "passthru", 00:25:06.664 "block_size": 512, 00:25:06.664 "num_blocks": 65536, 00:25:06.664 "uuid": "8055d61a-e00e-5512-a7e6-8fd1f39bc371", 00:25:06.664 "assigned_rate_limits": { 00:25:06.664 "rw_ios_per_sec": 0, 00:25:06.664 "rw_mbytes_per_sec": 0, 00:25:06.664 "r_mbytes_per_sec": 0, 00:25:06.664 "w_mbytes_per_sec": 0 00:25:06.664 }, 00:25:06.664 "claimed": true, 00:25:06.664 "claim_type": "exclusive_write", 00:25:06.664 "zoned": false, 00:25:06.664 "supported_io_types": { 00:25:06.664 "read": true, 00:25:06.664 "write": true, 00:25:06.664 "unmap": true, 00:25:06.664 "write_zeroes": true, 00:25:06.664 "flush": true, 00:25:06.664 "reset": true, 00:25:06.664 "compare": false, 00:25:06.664 "compare_and_write": false, 00:25:06.664 "abort": true, 00:25:06.664 
"nvme_admin": false, 00:25:06.664 "nvme_io": false 00:25:06.664 }, 00:25:06.664 "memory_domains": [ 00:25:06.664 { 00:25:06.664 "dma_device_id": "system", 00:25:06.664 "dma_device_type": 1 00:25:06.664 }, 00:25:06.664 { 00:25:06.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.664 "dma_device_type": 2 00:25:06.664 } 00:25:06.664 ], 00:25:06.664 "driver_specific": { 00:25:06.664 "passthru": { 00:25:06.664 "name": "pt1", 00:25:06.664 "base_bdev_name": "malloc1" 00:25:06.664 } 00:25:06.664 } 00:25:06.664 }' 00:25:06.664 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:06.664 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:06.922 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:06.922 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:06.922 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:06.922 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:06.922 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:06.922 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:06.922 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:06.922 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:07.227 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:07.227 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:07.227 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:07.227 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:07.227 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:07.227 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:07.227 "name": "pt2", 00:25:07.227 "aliases": [ 00:25:07.227 "c92a6986-535d-5326-8063-b94f42553ed5" 00:25:07.227 ], 00:25:07.227 "product_name": "passthru", 00:25:07.227 "block_size": 512, 00:25:07.227 "num_blocks": 65536, 00:25:07.227 "uuid": "c92a6986-535d-5326-8063-b94f42553ed5", 00:25:07.227 "assigned_rate_limits": { 00:25:07.227 "rw_ios_per_sec": 0, 00:25:07.227 "rw_mbytes_per_sec": 0, 00:25:07.227 "r_mbytes_per_sec": 0, 00:25:07.227 "w_mbytes_per_sec": 0 00:25:07.227 }, 00:25:07.227 "claimed": true, 00:25:07.227 "claim_type": "exclusive_write", 00:25:07.227 "zoned": false, 00:25:07.227 "supported_io_types": { 00:25:07.227 "read": true, 00:25:07.227 "write": true, 00:25:07.227 "unmap": true, 00:25:07.227 "write_zeroes": true, 00:25:07.227 "flush": true, 00:25:07.227 "reset": true, 00:25:07.227 "compare": false, 00:25:07.227 "compare_and_write": false, 00:25:07.227 "abort": true, 00:25:07.227 "nvme_admin": false, 00:25:07.227 "nvme_io": false 00:25:07.227 }, 00:25:07.227 "memory_domains": [ 00:25:07.227 { 00:25:07.227 "dma_device_id": "system", 00:25:07.227 "dma_device_type": 1 00:25:07.227 }, 00:25:07.227 { 00:25:07.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.227 "dma_device_type": 2 00:25:07.227 } 00:25:07.227 ], 00:25:07.227 "driver_specific": { 00:25:07.227 "passthru": { 00:25:07.227 "name": "pt2", 00:25:07.227 
"base_bdev_name": "malloc2" 00:25:07.227 } 00:25:07.227 } 00:25:07.227 }' 00:25:07.227 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:07.227 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:07.486 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:07.486 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:07.486 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:07.486 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:07.486 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:07.486 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:07.486 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:07.486 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:07.744 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:07.744 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:07.744 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:07.744 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:08.002 [2024-05-15 11:18:26.440927] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:08.002 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e 00:25:08.002 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e ']' 00:25:08.002 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:08.260 [2024-05-15 11:18:26.684802] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:08.260 [2024-05-15 11:18:26.684852] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.260 [2024-05-15 11:18:26.684941] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.260 [2024-05-15 11:18:26.684989] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:08.260 [2024-05-15 11:18:26.685001] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:25:08.260 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.260 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:08.518 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:08.518 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:08.518 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:08.518 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:25:08.776 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:08.776 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:08.776 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:08.776 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:09.034 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:25:09.297 [2024-05-15 11:18:27.772977] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:09.297 [2024-05-15 11:18:27.774661] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:09.297 [2024-05-15 11:18:27.774727] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:09.297 [2024-05-15 11:18:27.774798] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:09.297 [2024-05-15 11:18:27.774855] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:09.297 [2024-05-15 11:18:27.774870] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:25:09.297 request: 00:25:09.297 { 00:25:09.297 "name": "raid_bdev1", 00:25:09.297 "raid_level": "concat", 
00:25:09.297 "base_bdevs": [ 00:25:09.297 "malloc1", 00:25:09.297 "malloc2" 00:25:09.297 ], 00:25:09.297 "strip_size_kb": 64, 00:25:09.297 "superblock": false, 00:25:09.297 "method": "bdev_raid_create", 00:25:09.297 "req_id": 1 00:25:09.297 } 00:25:09.297 Got JSON-RPC error response 00:25:09.297 response: 00:25:09.297 { 00:25:09.297 "code": -17, 00:25:09.297 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:09.297 } 00:25:09.297 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:25:09.297 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:09.297 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:09.297 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:09.297 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.297 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:09.556 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:09.556 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:09.556 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:09.814 [2024-05-15 11:18:28.264972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:09.814 [2024-05-15 11:18:28.265097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.814 [2024-05-15 11:18:28.265198] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:25:09.814 [2024-05-15 11:18:28.265235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.814 [2024-05-15 11:18:28.267197] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.814 [2024-05-15 11:18:28.267265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:09.814 [2024-05-15 11:18:28.267357] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:09.814 [2024-05-15 11:18:28.267432] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:09.814 pt1 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.814 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.072 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:10.072 "name": "raid_bdev1", 00:25:10.072 "uuid": "7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e", 00:25:10.072 "strip_size_kb": 64, 00:25:10.072 "state": "configuring", 00:25:10.072 "raid_level": "concat", 00:25:10.072 "superblock": true, 00:25:10.072 "num_base_bdevs": 2, 00:25:10.072 "num_base_bdevs_discovered": 1, 00:25:10.072 "num_base_bdevs_operational": 2, 00:25:10.072 "base_bdevs_list": [ 00:25:10.072 { 00:25:10.072 "name": "pt1", 00:25:10.072 "uuid": "8055d61a-e00e-5512-a7e6-8fd1f39bc371", 00:25:10.072 "is_configured": true, 00:25:10.072 "data_offset": 2048, 00:25:10.072 "data_size": 63488 00:25:10.072 }, 00:25:10.072 { 00:25:10.072 "name": null, 00:25:10.072 "uuid": "c92a6986-535d-5326-8063-b94f42553ed5", 00:25:10.072 "is_configured": false, 00:25:10.072 "data_offset": 2048, 00:25:10.072 "data_size": 63488 00:25:10.072 } 00:25:10.072 ] 00:25:10.072 }' 00:25:10.072 11:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:10.072 11:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.640 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:10.640 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:10.640 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:10.640 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:10.899 [2024-05-15 11:18:29.373110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:10.899 [2024-05-15 11:18:29.373238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.899 [2024-05-15 11:18:29.373293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:25:10.899 [2024-05-15 11:18:29.373323] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.899 [2024-05-15 11:18:29.373710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.899 [2024-05-15 11:18:29.373750] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:10.899 [2024-05-15 11:18:29.374076] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:10.899 [2024-05-15 11:18:29.374114] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:10.899 [2024-05-15 11:18:29.374204] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:25:10.899 [2024-05-15 11:18:29.374219] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:10.899 [2024-05-15 11:18:29.374307] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:25:10.899 [2024-05-15 11:18:29.374520] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:25:10.899 [2024-05-15 11:18:29.374536] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:25:10.899 [2024-05-15 11:18:29.374635] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.899 pt2 00:25:10.899 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:10.899 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:10.899 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:10.899 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:10.899 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:10.900 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:10.900 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:10.900 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:10.900 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:10.900 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:10.900 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:10.900 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:10.900 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.900 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.158 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:11.158 "name": "raid_bdev1", 00:25:11.158 "uuid": "7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e", 00:25:11.158 "strip_size_kb": 64, 00:25:11.158 "state": "online", 00:25:11.158 "raid_level": "concat", 00:25:11.158 "superblock": true, 00:25:11.158 "num_base_bdevs": 2, 00:25:11.158 "num_base_bdevs_discovered": 2, 00:25:11.158 "num_base_bdevs_operational": 2, 00:25:11.158 "base_bdevs_list": [ 00:25:11.158 { 00:25:11.158 "name": "pt1", 00:25:11.158 "uuid": "8055d61a-e00e-5512-a7e6-8fd1f39bc371", 00:25:11.158 "is_configured": true, 00:25:11.158 "data_offset": 2048, 00:25:11.158 "data_size": 63488 00:25:11.158 }, 00:25:11.158 { 00:25:11.158 "name": "pt2", 00:25:11.158 "uuid": "c92a6986-535d-5326-8063-b94f42553ed5", 00:25:11.158 "is_configured": true, 00:25:11.158 "data_offset": 2048, 00:25:11.158 "data_size": 63488 00:25:11.158 } 00:25:11.158 ] 00:25:11.158 }' 00:25:11.158 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:11.158 11:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.726 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:11.726 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:25:11.726 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:11.726 11:18:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:11.726 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:11.726 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:25:11.726 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:11.726 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:11.984 [2024-05-15 11:18:30.449588] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.984 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:11.984 "name": "raid_bdev1", 00:25:11.984 "aliases": [ 00:25:11.984 "7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e" 00:25:11.984 ], 00:25:11.984 "product_name": "Raid Volume", 00:25:11.984 "block_size": 512, 00:25:11.984 "num_blocks": 126976, 00:25:11.984 "uuid": "7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e", 00:25:11.984 "assigned_rate_limits": { 00:25:11.984 "rw_ios_per_sec": 0, 00:25:11.984 "rw_mbytes_per_sec": 0, 00:25:11.984 "r_mbytes_per_sec": 0, 00:25:11.984 "w_mbytes_per_sec": 0 00:25:11.984 }, 00:25:11.984 "claimed": false, 00:25:11.984 "zoned": false, 00:25:11.984 "supported_io_types": { 00:25:11.984 "read": true, 00:25:11.984 "write": true, 00:25:11.984 "unmap": true, 00:25:11.984 "write_zeroes": true, 00:25:11.984 "flush": true, 00:25:11.984 "reset": true, 00:25:11.984 "compare": false, 00:25:11.984 "compare_and_write": false, 00:25:11.984 "abort": false, 00:25:11.984 "nvme_admin": false, 00:25:11.984 "nvme_io": false 00:25:11.984 }, 00:25:11.984 "memory_domains": [ 00:25:11.984 { 00:25:11.984 "dma_device_id": "system", 00:25:11.984 "dma_device_type": 1 00:25:11.985 }, 00:25:11.985 { 00:25:11.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.985 "dma_device_type": 2 00:25:11.985 }, 00:25:11.985 { 00:25:11.985 "dma_device_id": "system", 00:25:11.985 "dma_device_type": 1 00:25:11.985 }, 00:25:11.985 { 00:25:11.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.985 "dma_device_type": 2 00:25:11.985 } 00:25:11.985 ], 00:25:11.985 "driver_specific": { 00:25:11.985 "raid": { 00:25:11.985 "uuid": "7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e", 00:25:11.985 "strip_size_kb": 64, 00:25:11.985 "state": "online", 00:25:11.985 "raid_level": "concat", 00:25:11.985 "superblock": true, 00:25:11.985 "num_base_bdevs": 2, 00:25:11.985 "num_base_bdevs_discovered": 2, 00:25:11.985 "num_base_bdevs_operational": 2, 00:25:11.985 "base_bdevs_list": [ 00:25:11.985 { 00:25:11.985 "name": "pt1", 00:25:11.985 "uuid": "8055d61a-e00e-5512-a7e6-8fd1f39bc371", 00:25:11.985 "is_configured": true, 00:25:11.985 "data_offset": 2048, 00:25:11.985 "data_size": 63488 00:25:11.985 }, 00:25:11.985 { 00:25:11.985 "name": "pt2", 00:25:11.985 "uuid": "c92a6986-535d-5326-8063-b94f42553ed5", 00:25:11.985 "is_configured": true, 00:25:11.985 "data_offset": 2048, 00:25:11.985 "data_size": 63488 00:25:11.985 } 00:25:11.985 ] 00:25:11.985 } 00:25:11.985 } 00:25:11.985 }' 00:25:11.985 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:11.985 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:25:11.985 pt2' 00:25:11.985 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:11.985 11:18:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:11.985 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:12.243 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:12.243 "name": "pt1", 00:25:12.243 "aliases": [ 00:25:12.243 "8055d61a-e00e-5512-a7e6-8fd1f39bc371" 00:25:12.243 ], 00:25:12.243 "product_name": "passthru", 00:25:12.243 "block_size": 512, 00:25:12.243 "num_blocks": 65536, 00:25:12.243 "uuid": "8055d61a-e00e-5512-a7e6-8fd1f39bc371", 00:25:12.243 "assigned_rate_limits": { 00:25:12.243 "rw_ios_per_sec": 0, 00:25:12.243 "rw_mbytes_per_sec": 0, 00:25:12.243 "r_mbytes_per_sec": 0, 00:25:12.243 "w_mbytes_per_sec": 0 00:25:12.243 }, 00:25:12.243 "claimed": true, 00:25:12.243 "claim_type": "exclusive_write", 00:25:12.243 "zoned": false, 00:25:12.243 "supported_io_types": { 00:25:12.243 "read": true, 00:25:12.243 "write": true, 00:25:12.243 "unmap": true, 00:25:12.243 "write_zeroes": true, 00:25:12.243 "flush": true, 00:25:12.243 "reset": true, 00:25:12.243 "compare": false, 00:25:12.243 "compare_and_write": false, 00:25:12.243 "abort": true, 00:25:12.243 "nvme_admin": false, 00:25:12.243 "nvme_io": false 00:25:12.243 }, 00:25:12.243 "memory_domains": [ 00:25:12.243 { 00:25:12.243 "dma_device_id": "system", 00:25:12.243 "dma_device_type": 1 00:25:12.243 }, 00:25:12.243 { 00:25:12.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.243 "dma_device_type": 2 00:25:12.243 } 00:25:12.243 ], 00:25:12.243 "driver_specific": { 00:25:12.243 "passthru": { 00:25:12.243 "name": "pt1", 00:25:12.243 "base_bdev_name": "malloc1" 00:25:12.243 } 00:25:12.243 } 00:25:12.243 }' 00:25:12.243 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:12.243 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:12.501 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:12.501 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:12.501 11:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:12.501 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:12.501 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:12.501 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:12.501 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:12.501 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:12.759 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:12.759 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:12.759 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:12.759 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:12.759 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:13.030 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:13.030 "name": "pt2", 00:25:13.030 "aliases": [ 00:25:13.030 "c92a6986-535d-5326-8063-b94f42553ed5" 
00:25:13.030 ], 00:25:13.030 "product_name": "passthru", 00:25:13.030 "block_size": 512, 00:25:13.030 "num_blocks": 65536, 00:25:13.030 "uuid": "c92a6986-535d-5326-8063-b94f42553ed5", 00:25:13.030 "assigned_rate_limits": { 00:25:13.030 "rw_ios_per_sec": 0, 00:25:13.030 "rw_mbytes_per_sec": 0, 00:25:13.030 "r_mbytes_per_sec": 0, 00:25:13.030 "w_mbytes_per_sec": 0 00:25:13.030 }, 00:25:13.030 "claimed": true, 00:25:13.030 "claim_type": "exclusive_write", 00:25:13.030 "zoned": false, 00:25:13.030 "supported_io_types": { 00:25:13.030 "read": true, 00:25:13.030 "write": true, 00:25:13.030 "unmap": true, 00:25:13.030 "write_zeroes": true, 00:25:13.030 "flush": true, 00:25:13.030 "reset": true, 00:25:13.030 "compare": false, 00:25:13.030 "compare_and_write": false, 00:25:13.030 "abort": true, 00:25:13.030 "nvme_admin": false, 00:25:13.030 "nvme_io": false 00:25:13.030 }, 00:25:13.030 "memory_domains": [ 00:25:13.030 { 00:25:13.030 "dma_device_id": "system", 00:25:13.030 "dma_device_type": 1 00:25:13.030 }, 00:25:13.030 { 00:25:13.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.030 "dma_device_type": 2 00:25:13.030 } 00:25:13.030 ], 00:25:13.030 "driver_specific": { 00:25:13.030 "passthru": { 00:25:13.030 "name": "pt2", 00:25:13.030 "base_bdev_name": "malloc2" 00:25:13.030 } 00:25:13.030 } 00:25:13.030 }' 00:25:13.030 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.030 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.030 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:13.030 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.030 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.288 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.288 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.288 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.288 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.288 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.288 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.288 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:13.288 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:13.288 11:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:13.546 [2024-05-15 11:18:32.081818] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e '!=' 7a1ca963-f8ed-4d5c-a42d-e6b40cc1024e ']' 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 55048 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@946 -- # '[' -z 55048 ']' 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 55048 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55048 00:25:13.546 killing process with pid 55048 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55048' 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 55048 00:25:13.546 11:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 55048 00:25:13.546 [2024-05-15 11:18:32.117918] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:13.546 [2024-05-15 11:18:32.118006] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:13.546 [2024-05-15 11:18:32.118045] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:13.546 [2024-05-15 11:18:32.118059] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:25:13.803 [2024-05-15 11:18:32.289430] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:15.176 ************************************ 00:25:15.176 END TEST raid_superblock_test 00:25:15.176 ************************************ 00:25:15.176 11:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:25:15.176 00:25:15.176 real 0m11.674s 00:25:15.176 user 0m20.705s 00:25:15.176 sys 0m1.229s 00:25:15.176 11:18:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:15.176 11:18:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.176 11:18:33 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:25:15.176 11:18:33 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:25:15.176 11:18:33 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:25:15.176 11:18:33 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:15.176 11:18:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:15.176 ************************************ 00:25:15.176 START TEST raid_state_function_test 00:25:15.176 ************************************ 00:25:15.176 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:25:15.176 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:25:15.176 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:25:15.176 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:25:15.176 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:25:15.176 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # 
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:15.176 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:25:15.176 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:15.176 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:25:15.177 Process raid pid: 55424 00:25:15.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=55424 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 55424' 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 55424 /var/tmp/spdk-raid.sock 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 55424 ']' 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:15.177 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.177 [2024-05-15 11:18:33.738905] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
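The state-function test drives the same socket through a freshly started bdev_svc app. A rough sketch of the bring-up it performs, using only the paths and arguments visible in this log (waitforlisten is the harness helper that polls the socket until the app answers; the sleep below is merely a stand-in assumption for it):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    sleep 1   # stand-in for: waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # neither BaseBdev1 nor BaseBdev2 exists yet, so the raid is created in the "configuring" state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid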
00:25:15.177 [2024-05-15 11:18:33.739108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.435 [2024-05-15 11:18:33.905625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.693 [2024-05-15 11:18:34.142925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.951 [2024-05-15 11:18:34.342080] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:15.951 11:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:15.951 11:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:25:15.951 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:16.210 [2024-05-15 11:18:34.691752] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:16.210 [2024-05-15 11:18:34.691857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:16.210 [2024-05-15 11:18:34.691874] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:16.210 [2024-05-15 11:18:34.691896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.210 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.468 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:16.468 "name": "Existed_Raid", 00:25:16.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.468 "strip_size_kb": 0, 00:25:16.468 "state": "configuring", 00:25:16.468 "raid_level": "raid1", 00:25:16.468 "superblock": false, 00:25:16.468 "num_base_bdevs": 2, 00:25:16.468 "num_base_bdevs_discovered": 0, 00:25:16.468 "num_base_bdevs_operational": 2, 00:25:16.468 "base_bdevs_list": [ 00:25:16.468 { 
00:25:16.468 "name": "BaseBdev1", 00:25:16.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.468 "is_configured": false, 00:25:16.468 "data_offset": 0, 00:25:16.468 "data_size": 0 00:25:16.468 }, 00:25:16.468 { 00:25:16.468 "name": "BaseBdev2", 00:25:16.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.468 "is_configured": false, 00:25:16.468 "data_offset": 0, 00:25:16.468 "data_size": 0 00:25:16.468 } 00:25:16.468 ] 00:25:16.468 }' 00:25:16.468 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:16.468 11:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.033 11:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:17.291 [2024-05-15 11:18:35.679824] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:17.291 [2024-05-15 11:18:35.679885] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:25:17.291 11:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:17.291 [2024-05-15 11:18:35.871853] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:17.291 [2024-05-15 11:18:35.871978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:17.291 [2024-05-15 11:18:35.871996] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:17.291 [2024-05-15 11:18:35.872024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:17.292 11:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:17.550 [2024-05-15 11:18:36.160577] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:17.550 BaseBdev1 00:25:17.550 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:25:17.550 11:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:17.550 11:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:17.550 11:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:17.550 11:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:17.550 11:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:17.550 11:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:17.809 11:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:18.067 [ 00:25:18.067 { 00:25:18.067 "name": "BaseBdev1", 00:25:18.067 "aliases": [ 00:25:18.067 "b49bf867-ef51-4f83-9879-d77a9f5172bd" 00:25:18.067 ], 00:25:18.067 "product_name": "Malloc disk", 00:25:18.067 "block_size": 512, 00:25:18.067 "num_blocks": 65536, 
00:25:18.067 "uuid": "b49bf867-ef51-4f83-9879-d77a9f5172bd", 00:25:18.067 "assigned_rate_limits": { 00:25:18.067 "rw_ios_per_sec": 0, 00:25:18.067 "rw_mbytes_per_sec": 0, 00:25:18.067 "r_mbytes_per_sec": 0, 00:25:18.067 "w_mbytes_per_sec": 0 00:25:18.067 }, 00:25:18.067 "claimed": true, 00:25:18.067 "claim_type": "exclusive_write", 00:25:18.067 "zoned": false, 00:25:18.067 "supported_io_types": { 00:25:18.067 "read": true, 00:25:18.067 "write": true, 00:25:18.067 "unmap": true, 00:25:18.067 "write_zeroes": true, 00:25:18.067 "flush": true, 00:25:18.067 "reset": true, 00:25:18.067 "compare": false, 00:25:18.067 "compare_and_write": false, 00:25:18.067 "abort": true, 00:25:18.068 "nvme_admin": false, 00:25:18.068 "nvme_io": false 00:25:18.068 }, 00:25:18.068 "memory_domains": [ 00:25:18.068 { 00:25:18.068 "dma_device_id": "system", 00:25:18.068 "dma_device_type": 1 00:25:18.068 }, 00:25:18.068 { 00:25:18.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.068 "dma_device_type": 2 00:25:18.068 } 00:25:18.068 ], 00:25:18.068 "driver_specific": {} 00:25:18.068 } 00:25:18.068 ] 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.068 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.325 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:18.325 "name": "Existed_Raid", 00:25:18.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.325 "strip_size_kb": 0, 00:25:18.325 "state": "configuring", 00:25:18.325 "raid_level": "raid1", 00:25:18.325 "superblock": false, 00:25:18.325 "num_base_bdevs": 2, 00:25:18.325 "num_base_bdevs_discovered": 1, 00:25:18.325 "num_base_bdevs_operational": 2, 00:25:18.325 "base_bdevs_list": [ 00:25:18.325 { 00:25:18.325 "name": "BaseBdev1", 00:25:18.325 "uuid": "b49bf867-ef51-4f83-9879-d77a9f5172bd", 00:25:18.325 "is_configured": true, 00:25:18.325 "data_offset": 0, 00:25:18.325 "data_size": 65536 00:25:18.325 }, 00:25:18.325 { 00:25:18.325 "name": "BaseBdev2", 00:25:18.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.326 "is_configured": false, 00:25:18.326 
"data_offset": 0, 00:25:18.326 "data_size": 0 00:25:18.326 } 00:25:18.326 ] 00:25:18.326 }' 00:25:18.326 11:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:18.326 11:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.890 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:19.148 [2024-05-15 11:18:37.676803] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:19.148 [2024-05-15 11:18:37.676874] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:25:19.148 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:19.405 [2024-05-15 11:18:37.940866] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:19.405 [2024-05-15 11:18:37.942482] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:19.405 [2024-05-15 11:18:37.942548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.405 11:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.663 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:19.663 "name": "Existed_Raid", 00:25:19.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.663 "strip_size_kb": 0, 00:25:19.663 "state": "configuring", 00:25:19.663 "raid_level": "raid1", 00:25:19.663 "superblock": false, 00:25:19.663 "num_base_bdevs": 2, 00:25:19.663 "num_base_bdevs_discovered": 1, 00:25:19.663 "num_base_bdevs_operational": 2, 00:25:19.663 "base_bdevs_list": [ 
00:25:19.663 { 00:25:19.663 "name": "BaseBdev1", 00:25:19.663 "uuid": "b49bf867-ef51-4f83-9879-d77a9f5172bd", 00:25:19.663 "is_configured": true, 00:25:19.663 "data_offset": 0, 00:25:19.663 "data_size": 65536 00:25:19.663 }, 00:25:19.663 { 00:25:19.663 "name": "BaseBdev2", 00:25:19.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.663 "is_configured": false, 00:25:19.663 "data_offset": 0, 00:25:19.663 "data_size": 0 00:25:19.663 } 00:25:19.663 ] 00:25:19.663 }' 00:25:19.663 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:19.663 11:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.598 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:20.598 [2024-05-15 11:18:39.146239] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:20.598 [2024-05-15 11:18:39.146299] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:25:20.598 [2024-05-15 11:18:39.146321] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:20.598 [2024-05-15 11:18:39.146438] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:25:20.598 [2024-05-15 11:18:39.146669] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:25:20.598 [2024-05-15 11:18:39.146684] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:25:20.598 BaseBdev2 00:25:20.598 [2024-05-15 11:18:39.147164] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.598 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:25:20.598 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:20.598 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:20.598 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:20.598 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:20.598 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:20.598 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:20.856 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:21.115 [ 00:25:21.115 { 00:25:21.115 "name": "BaseBdev2", 00:25:21.115 "aliases": [ 00:25:21.115 "2340c9ca-1733-483d-9dad-edbb529e1a21" 00:25:21.115 ], 00:25:21.115 "product_name": "Malloc disk", 00:25:21.115 "block_size": 512, 00:25:21.115 "num_blocks": 65536, 00:25:21.115 "uuid": "2340c9ca-1733-483d-9dad-edbb529e1a21", 00:25:21.115 "assigned_rate_limits": { 00:25:21.115 "rw_ios_per_sec": 0, 00:25:21.115 "rw_mbytes_per_sec": 0, 00:25:21.115 "r_mbytes_per_sec": 0, 00:25:21.115 "w_mbytes_per_sec": 0 00:25:21.115 }, 00:25:21.115 "claimed": true, 00:25:21.115 "claim_type": "exclusive_write", 00:25:21.115 "zoned": false, 00:25:21.115 "supported_io_types": { 00:25:21.115 "read": 
true, 00:25:21.115 "write": true, 00:25:21.115 "unmap": true, 00:25:21.115 "write_zeroes": true, 00:25:21.115 "flush": true, 00:25:21.115 "reset": true, 00:25:21.115 "compare": false, 00:25:21.115 "compare_and_write": false, 00:25:21.115 "abort": true, 00:25:21.115 "nvme_admin": false, 00:25:21.115 "nvme_io": false 00:25:21.115 }, 00:25:21.115 "memory_domains": [ 00:25:21.115 { 00:25:21.115 "dma_device_id": "system", 00:25:21.115 "dma_device_type": 1 00:25:21.115 }, 00:25:21.115 { 00:25:21.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.115 "dma_device_type": 2 00:25:21.115 } 00:25:21.115 ], 00:25:21.115 "driver_specific": {} 00:25:21.115 } 00:25:21.115 ] 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.115 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.373 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:21.373 "name": "Existed_Raid", 00:25:21.373 "uuid": "fbbdb786-27ee-4811-b7b6-60e790d47c7c", 00:25:21.373 "strip_size_kb": 0, 00:25:21.373 "state": "online", 00:25:21.373 "raid_level": "raid1", 00:25:21.373 "superblock": false, 00:25:21.373 "num_base_bdevs": 2, 00:25:21.373 "num_base_bdevs_discovered": 2, 00:25:21.373 "num_base_bdevs_operational": 2, 00:25:21.373 "base_bdevs_list": [ 00:25:21.373 { 00:25:21.373 "name": "BaseBdev1", 00:25:21.373 "uuid": "b49bf867-ef51-4f83-9879-d77a9f5172bd", 00:25:21.373 "is_configured": true, 00:25:21.373 "data_offset": 0, 00:25:21.373 "data_size": 65536 00:25:21.373 }, 00:25:21.373 { 00:25:21.373 "name": "BaseBdev2", 00:25:21.373 "uuid": "2340c9ca-1733-483d-9dad-edbb529e1a21", 00:25:21.373 "is_configured": true, 00:25:21.373 "data_offset": 0, 00:25:21.373 "data_size": 65536 00:25:21.373 } 00:25:21.373 ] 00:25:21.373 }' 00:25:21.373 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:21.373 11:18:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.939 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:25:21.939 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:25:21.939 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:21.939 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:21.939 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:21.939 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:25:21.939 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:21.939 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:22.198 [2024-05-15 11:18:40.722648] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:22.198 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:22.198 "name": "Existed_Raid", 00:25:22.198 "aliases": [ 00:25:22.198 "fbbdb786-27ee-4811-b7b6-60e790d47c7c" 00:25:22.198 ], 00:25:22.198 "product_name": "Raid Volume", 00:25:22.198 "block_size": 512, 00:25:22.198 "num_blocks": 65536, 00:25:22.198 "uuid": "fbbdb786-27ee-4811-b7b6-60e790d47c7c", 00:25:22.198 "assigned_rate_limits": { 00:25:22.198 "rw_ios_per_sec": 0, 00:25:22.198 "rw_mbytes_per_sec": 0, 00:25:22.198 "r_mbytes_per_sec": 0, 00:25:22.198 "w_mbytes_per_sec": 0 00:25:22.198 }, 00:25:22.198 "claimed": false, 00:25:22.198 "zoned": false, 00:25:22.198 "supported_io_types": { 00:25:22.198 "read": true, 00:25:22.198 "write": true, 00:25:22.198 "unmap": false, 00:25:22.198 "write_zeroes": true, 00:25:22.198 "flush": false, 00:25:22.198 "reset": true, 00:25:22.198 "compare": false, 00:25:22.198 "compare_and_write": false, 00:25:22.198 "abort": false, 00:25:22.198 "nvme_admin": false, 00:25:22.198 "nvme_io": false 00:25:22.198 }, 00:25:22.198 "memory_domains": [ 00:25:22.198 { 00:25:22.198 "dma_device_id": "system", 00:25:22.198 "dma_device_type": 1 00:25:22.198 }, 00:25:22.198 { 00:25:22.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.198 "dma_device_type": 2 00:25:22.198 }, 00:25:22.198 { 00:25:22.198 "dma_device_id": "system", 00:25:22.198 "dma_device_type": 1 00:25:22.198 }, 00:25:22.198 { 00:25:22.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.198 "dma_device_type": 2 00:25:22.198 } 00:25:22.198 ], 00:25:22.198 "driver_specific": { 00:25:22.198 "raid": { 00:25:22.198 "uuid": "fbbdb786-27ee-4811-b7b6-60e790d47c7c", 00:25:22.198 "strip_size_kb": 0, 00:25:22.198 "state": "online", 00:25:22.198 "raid_level": "raid1", 00:25:22.198 "superblock": false, 00:25:22.198 "num_base_bdevs": 2, 00:25:22.198 "num_base_bdevs_discovered": 2, 00:25:22.198 "num_base_bdevs_operational": 2, 00:25:22.198 "base_bdevs_list": [ 00:25:22.198 { 00:25:22.198 "name": "BaseBdev1", 00:25:22.198 "uuid": "b49bf867-ef51-4f83-9879-d77a9f5172bd", 00:25:22.198 "is_configured": true, 00:25:22.198 "data_offset": 0, 00:25:22.198 "data_size": 65536 00:25:22.198 }, 00:25:22.198 { 00:25:22.198 "name": "BaseBdev2", 00:25:22.198 "uuid": "2340c9ca-1733-483d-9dad-edbb529e1a21", 00:25:22.198 "is_configured": true, 00:25:22.198 "data_offset": 0, 00:25:22.198 "data_size": 
65536 00:25:22.198 } 00:25:22.198 ] 00:25:22.198 } 00:25:22.198 } 00:25:22.198 }' 00:25:22.198 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:22.198 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:25:22.198 BaseBdev2' 00:25:22.198 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:22.198 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:22.198 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:22.456 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:22.456 "name": "BaseBdev1", 00:25:22.456 "aliases": [ 00:25:22.456 "b49bf867-ef51-4f83-9879-d77a9f5172bd" 00:25:22.456 ], 00:25:22.456 "product_name": "Malloc disk", 00:25:22.456 "block_size": 512, 00:25:22.456 "num_blocks": 65536, 00:25:22.456 "uuid": "b49bf867-ef51-4f83-9879-d77a9f5172bd", 00:25:22.456 "assigned_rate_limits": { 00:25:22.456 "rw_ios_per_sec": 0, 00:25:22.456 "rw_mbytes_per_sec": 0, 00:25:22.456 "r_mbytes_per_sec": 0, 00:25:22.456 "w_mbytes_per_sec": 0 00:25:22.456 }, 00:25:22.456 "claimed": true, 00:25:22.456 "claim_type": "exclusive_write", 00:25:22.456 "zoned": false, 00:25:22.456 "supported_io_types": { 00:25:22.456 "read": true, 00:25:22.456 "write": true, 00:25:22.456 "unmap": true, 00:25:22.456 "write_zeroes": true, 00:25:22.456 "flush": true, 00:25:22.456 "reset": true, 00:25:22.456 "compare": false, 00:25:22.456 "compare_and_write": false, 00:25:22.456 "abort": true, 00:25:22.456 "nvme_admin": false, 00:25:22.456 "nvme_io": false 00:25:22.457 }, 00:25:22.457 "memory_domains": [ 00:25:22.457 { 00:25:22.457 "dma_device_id": "system", 00:25:22.457 "dma_device_type": 1 00:25:22.457 }, 00:25:22.457 { 00:25:22.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.457 "dma_device_type": 2 00:25:22.457 } 00:25:22.457 ], 00:25:22.457 "driver_specific": {} 00:25:22.457 }' 00:25:22.457 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:22.457 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:22.714 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:22.714 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:22.714 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:22.714 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:22.715 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:22.715 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:22.715 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:22.715 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:22.972 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:22.972 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:22.972 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in 
$base_bdev_names 00:25:22.972 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:22.972 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:23.230 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:23.230 "name": "BaseBdev2", 00:25:23.230 "aliases": [ 00:25:23.230 "2340c9ca-1733-483d-9dad-edbb529e1a21" 00:25:23.230 ], 00:25:23.230 "product_name": "Malloc disk", 00:25:23.230 "block_size": 512, 00:25:23.230 "num_blocks": 65536, 00:25:23.230 "uuid": "2340c9ca-1733-483d-9dad-edbb529e1a21", 00:25:23.230 "assigned_rate_limits": { 00:25:23.230 "rw_ios_per_sec": 0, 00:25:23.230 "rw_mbytes_per_sec": 0, 00:25:23.230 "r_mbytes_per_sec": 0, 00:25:23.230 "w_mbytes_per_sec": 0 00:25:23.230 }, 00:25:23.230 "claimed": true, 00:25:23.230 "claim_type": "exclusive_write", 00:25:23.230 "zoned": false, 00:25:23.230 "supported_io_types": { 00:25:23.230 "read": true, 00:25:23.230 "write": true, 00:25:23.230 "unmap": true, 00:25:23.230 "write_zeroes": true, 00:25:23.230 "flush": true, 00:25:23.230 "reset": true, 00:25:23.230 "compare": false, 00:25:23.230 "compare_and_write": false, 00:25:23.230 "abort": true, 00:25:23.230 "nvme_admin": false, 00:25:23.230 "nvme_io": false 00:25:23.230 }, 00:25:23.230 "memory_domains": [ 00:25:23.230 { 00:25:23.230 "dma_device_id": "system", 00:25:23.230 "dma_device_type": 1 00:25:23.230 }, 00:25:23.230 { 00:25:23.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.230 "dma_device_type": 2 00:25:23.230 } 00:25:23.230 ], 00:25:23.230 "driver_specific": {} 00:25:23.230 }' 00:25:23.230 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:23.230 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:23.230 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:23.230 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:23.230 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:23.488 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:23.488 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:23.488 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:23.488 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:23.488 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:23.488 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:23.488 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:23.488 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:23.745 [2024-05-15 11:18:42.254729] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:23.745 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:25:23.745 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:25:23.745 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # 
case $1 in 00:25:23.745 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.746 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.003 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:24.003 "name": "Existed_Raid", 00:25:24.003 "uuid": "fbbdb786-27ee-4811-b7b6-60e790d47c7c", 00:25:24.003 "strip_size_kb": 0, 00:25:24.003 "state": "online", 00:25:24.003 "raid_level": "raid1", 00:25:24.003 "superblock": false, 00:25:24.003 "num_base_bdevs": 2, 00:25:24.003 "num_base_bdevs_discovered": 1, 00:25:24.003 "num_base_bdevs_operational": 1, 00:25:24.003 "base_bdevs_list": [ 00:25:24.003 { 00:25:24.003 "name": null, 00:25:24.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.003 "is_configured": false, 00:25:24.003 "data_offset": 0, 00:25:24.003 "data_size": 65536 00:25:24.003 }, 00:25:24.003 { 00:25:24.003 "name": "BaseBdev2", 00:25:24.003 "uuid": "2340c9ca-1733-483d-9dad-edbb529e1a21", 00:25:24.003 "is_configured": true, 00:25:24.003 "data_offset": 0, 00:25:24.003 "data_size": 65536 00:25:24.003 } 00:25:24.003 ] 00:25:24.003 }' 00:25:24.003 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:24.003 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:24.937 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:24.937 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.937 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:24.937 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:24.937 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:25:24.937 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:25.194 [2024-05-15 11:18:43.695430] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:25.194 [2024-05-15 11:18:43.695516] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:25.194 [2024-05-15 11:18:43.783074] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.194 [2024-05-15 11:18:43.783195] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.194 [2024-05-15 11:18:43.783212] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:25:25.194 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:25.194 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:25.194 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:25:25.194 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 55424 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 55424 ']' 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 55424 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55424 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:25.453 killing process with pid 55424 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55424' 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 55424 00:25:25.453 11:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 55424 00:25:25.453 [2024-05-15 11:18:44.075163] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:25.453 [2024-05-15 11:18:44.075291] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:26.827 ************************************ 00:25:26.827 END TEST raid_state_function_test 00:25:26.827 ************************************ 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:25:26.827 00:25:26.827 real 0m11.709s 00:25:26.827 user 0m20.700s 00:25:26.827 sys 0m1.219s 
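For reference, the raid_state_function_test run that just completed exercises the RAID module entirely through the RPC socket. The following is a minimal sketch of that flow, reconstructed from the trace above; it assumes the same bdev_svc target is still listening on /var/tmp/spdk-raid.sock and reuses the bdev names from the test, so treat it as illustrative rather than the literal test script.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Two 32 MiB malloc base bdevs with 512-byte blocks (512 B x 65536 blocks, matching num_blocks in the dumps above).
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
# raid1 array without an on-disk superblock ("superblock": false in the Existed_Raid info above).
$rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# Removing one mirror leg: raid1 has redundancy, so the array is expected to stay online
# with num_base_bdevs_operational dropping to 1, exactly as verified above.
$rpc -s $sock bdev_malloc_delete BaseBdev1
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect "online"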
00:25:26.827 11:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.827 11:18:45 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:25:26.827 11:18:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:25:26.827 11:18:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:26.827 11:18:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:26.827 ************************************ 00:25:26.827 START TEST raid_state_function_test_sb 00:25:26.827 ************************************ 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:26.827 Process raid pid: 55808 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=55808 00:25:26.827 11:18:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 55808' 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 55808 /var/tmp/spdk-raid.sock 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 55808 ']' 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:26.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:26.827 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.085 [2024-05-15 11:18:45.512072] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:25:27.085 [2024-05-15 11:18:45.512275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.085 [2024-05-15 11:18:45.680463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.342 [2024-05-15 11:18:45.939293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.600 [2024-05-15 11:18:46.151717] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:27.857 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:27.857 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:25:27.857 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:28.115 [2024-05-15 11:18:46.596274] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:28.115 [2024-05-15 11:18:46.596387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:28.115 [2024-05-15 11:18:46.596404] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:28.115 [2024-05-15 11:18:46.596426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:28.115 11:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.115 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.372 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:28.372 "name": "Existed_Raid", 00:25:28.372 "uuid": "ab3ca527-48af-4a52-9da2-90c576108b41", 00:25:28.372 "strip_size_kb": 0, 00:25:28.372 "state": "configuring", 00:25:28.372 "raid_level": "raid1", 00:25:28.372 "superblock": true, 00:25:28.372 "num_base_bdevs": 2, 00:25:28.372 "num_base_bdevs_discovered": 0, 00:25:28.372 "num_base_bdevs_operational": 2, 00:25:28.372 "base_bdevs_list": [ 00:25:28.372 { 00:25:28.372 "name": "BaseBdev1", 00:25:28.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.372 "is_configured": false, 00:25:28.372 "data_offset": 0, 00:25:28.372 "data_size": 0 00:25:28.372 }, 00:25:28.372 { 00:25:28.372 "name": "BaseBdev2", 00:25:28.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.372 "is_configured": false, 00:25:28.372 "data_offset": 0, 00:25:28.372 "data_size": 0 00:25:28.372 } 00:25:28.372 ] 00:25:28.372 }' 00:25:28.372 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:28.372 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.306 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:29.306 [2024-05-15 11:18:47.784203] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:29.306 [2024-05-15 11:18:47.784281] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:25:29.306 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:29.564 [2024-05-15 11:18:47.984285] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:29.564 [2024-05-15 11:18:47.984439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:29.564 [2024-05-15 11:18:47.984456] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:29.564 [2024-05-15 11:18:47.984485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:29.564 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev1 00:25:29.822 [2024-05-15 11:18:48.235732] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.822 BaseBdev1 00:25:29.822 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:25:29.822 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:29.822 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:29.822 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:29.822 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:29.822 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:29.822 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:29.822 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:30.081 [ 00:25:30.081 { 00:25:30.081 "name": "BaseBdev1", 00:25:30.081 "aliases": [ 00:25:30.081 "58442fbc-a41a-4690-9f77-017531f155d6" 00:25:30.081 ], 00:25:30.081 "product_name": "Malloc disk", 00:25:30.081 "block_size": 512, 00:25:30.081 "num_blocks": 65536, 00:25:30.081 "uuid": "58442fbc-a41a-4690-9f77-017531f155d6", 00:25:30.081 "assigned_rate_limits": { 00:25:30.081 "rw_ios_per_sec": 0, 00:25:30.081 "rw_mbytes_per_sec": 0, 00:25:30.081 "r_mbytes_per_sec": 0, 00:25:30.081 "w_mbytes_per_sec": 0 00:25:30.081 }, 00:25:30.081 "claimed": true, 00:25:30.081 "claim_type": "exclusive_write", 00:25:30.081 "zoned": false, 00:25:30.081 "supported_io_types": { 00:25:30.081 "read": true, 00:25:30.081 "write": true, 00:25:30.081 "unmap": true, 00:25:30.081 "write_zeroes": true, 00:25:30.081 "flush": true, 00:25:30.081 "reset": true, 00:25:30.081 "compare": false, 00:25:30.081 "compare_and_write": false, 00:25:30.081 "abort": true, 00:25:30.081 "nvme_admin": false, 00:25:30.081 "nvme_io": false 00:25:30.081 }, 00:25:30.081 "memory_domains": [ 00:25:30.081 { 00:25:30.081 "dma_device_id": "system", 00:25:30.081 "dma_device_type": 1 00:25:30.081 }, 00:25:30.081 { 00:25:30.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.081 "dma_device_type": 2 00:25:30.081 } 00:25:30.081 ], 00:25:30.081 "driver_specific": {} 00:25:30.081 } 00:25:30.081 ] 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.081 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.340 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:30.340 "name": "Existed_Raid", 00:25:30.340 "uuid": "2728a799-4cc2-4ccb-8a55-8201ad0eaaab", 00:25:30.340 "strip_size_kb": 0, 00:25:30.340 "state": "configuring", 00:25:30.340 "raid_level": "raid1", 00:25:30.340 "superblock": true, 00:25:30.340 "num_base_bdevs": 2, 00:25:30.340 "num_base_bdevs_discovered": 1, 00:25:30.340 "num_base_bdevs_operational": 2, 00:25:30.340 "base_bdevs_list": [ 00:25:30.340 { 00:25:30.340 "name": "BaseBdev1", 00:25:30.340 "uuid": "58442fbc-a41a-4690-9f77-017531f155d6", 00:25:30.340 "is_configured": true, 00:25:30.340 "data_offset": 2048, 00:25:30.340 "data_size": 63488 00:25:30.340 }, 00:25:30.340 { 00:25:30.340 "name": "BaseBdev2", 00:25:30.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.340 "is_configured": false, 00:25:30.340 "data_offset": 0, 00:25:30.340 "data_size": 0 00:25:30.340 } 00:25:30.340 ] 00:25:30.340 }' 00:25:30.340 11:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:30.340 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.275 11:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:31.275 [2024-05-15 11:18:49.815978] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:31.275 [2024-05-15 11:18:49.816045] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:25:31.275 11:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:31.533 [2024-05-15 11:18:50.076070] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.533 [2024-05-15 11:18:50.077704] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:31.533 [2024-05-15 11:18:50.077775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 
-- # local expected_state=configuring 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.533 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.791 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:31.791 "name": "Existed_Raid", 00:25:31.791 "uuid": "49515520-35e8-4a01-9094-a7114c052778", 00:25:31.791 "strip_size_kb": 0, 00:25:31.791 "state": "configuring", 00:25:31.791 "raid_level": "raid1", 00:25:31.791 "superblock": true, 00:25:31.791 "num_base_bdevs": 2, 00:25:31.791 "num_base_bdevs_discovered": 1, 00:25:31.791 "num_base_bdevs_operational": 2, 00:25:31.791 "base_bdevs_list": [ 00:25:31.791 { 00:25:31.791 "name": "BaseBdev1", 00:25:31.791 "uuid": "58442fbc-a41a-4690-9f77-017531f155d6", 00:25:31.791 "is_configured": true, 00:25:31.791 "data_offset": 2048, 00:25:31.791 "data_size": 63488 00:25:31.791 }, 00:25:31.791 { 00:25:31.791 "name": "BaseBdev2", 00:25:31.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.791 "is_configured": false, 00:25:31.791 "data_offset": 0, 00:25:31.791 "data_size": 0 00:25:31.791 } 00:25:31.791 ] 00:25:31.791 }' 00:25:31.791 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.791 11:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.357 11:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:32.615 [2024-05-15 11:18:51.129394] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:32.615 [2024-05-15 11:18:51.129584] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:25:32.615 [2024-05-15 11:18:51.129612] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:32.615 [2024-05-15 11:18:51.129715] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:25:32.615 BaseBdev2 00:25:32.615 [2024-05-15 11:18:51.130568] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:25:32.615 [2024-05-15 11:18:51.130590] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:25:32.615 [2024-05-15 11:18:51.130719] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:32.615 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 
-- # waitforbdev BaseBdev2 00:25:32.615 11:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:32.615 11:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:32.615 11:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:32.615 11:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:32.615 11:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:32.615 11:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:32.872 11:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:33.130 [ 00:25:33.130 { 00:25:33.130 "name": "BaseBdev2", 00:25:33.130 "aliases": [ 00:25:33.130 "8cf9bdff-5c05-4dca-8bc1-b8ab16ae0bae" 00:25:33.130 ], 00:25:33.130 "product_name": "Malloc disk", 00:25:33.130 "block_size": 512, 00:25:33.130 "num_blocks": 65536, 00:25:33.130 "uuid": "8cf9bdff-5c05-4dca-8bc1-b8ab16ae0bae", 00:25:33.130 "assigned_rate_limits": { 00:25:33.130 "rw_ios_per_sec": 0, 00:25:33.130 "rw_mbytes_per_sec": 0, 00:25:33.130 "r_mbytes_per_sec": 0, 00:25:33.130 "w_mbytes_per_sec": 0 00:25:33.130 }, 00:25:33.130 "claimed": true, 00:25:33.130 "claim_type": "exclusive_write", 00:25:33.130 "zoned": false, 00:25:33.130 "supported_io_types": { 00:25:33.130 "read": true, 00:25:33.130 "write": true, 00:25:33.130 "unmap": true, 00:25:33.130 "write_zeroes": true, 00:25:33.130 "flush": true, 00:25:33.130 "reset": true, 00:25:33.130 "compare": false, 00:25:33.130 "compare_and_write": false, 00:25:33.130 "abort": true, 00:25:33.130 "nvme_admin": false, 00:25:33.130 "nvme_io": false 00:25:33.130 }, 00:25:33.130 "memory_domains": [ 00:25:33.130 { 00:25:33.130 "dma_device_id": "system", 00:25:33.130 "dma_device_type": 1 00:25:33.130 }, 00:25:33.130 { 00:25:33.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.130 "dma_device_type": 2 00:25:33.130 } 00:25:33.130 ], 00:25:33.130 "driver_specific": {} 00:25:33.130 } 00:25:33.130 ] 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:33.130 11:18:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:33.130 "name": "Existed_Raid", 00:25:33.130 "uuid": "49515520-35e8-4a01-9094-a7114c052778", 00:25:33.130 "strip_size_kb": 0, 00:25:33.130 "state": "online", 00:25:33.130 "raid_level": "raid1", 00:25:33.130 "superblock": true, 00:25:33.130 "num_base_bdevs": 2, 00:25:33.130 "num_base_bdevs_discovered": 2, 00:25:33.130 "num_base_bdevs_operational": 2, 00:25:33.130 "base_bdevs_list": [ 00:25:33.130 { 00:25:33.130 "name": "BaseBdev1", 00:25:33.130 "uuid": "58442fbc-a41a-4690-9f77-017531f155d6", 00:25:33.130 "is_configured": true, 00:25:33.130 "data_offset": 2048, 00:25:33.130 "data_size": 63488 00:25:33.130 }, 00:25:33.130 { 00:25:33.130 "name": "BaseBdev2", 00:25:33.130 "uuid": "8cf9bdff-5c05-4dca-8bc1-b8ab16ae0bae", 00:25:33.130 "is_configured": true, 00:25:33.130 "data_offset": 2048, 00:25:33.130 "data_size": 63488 00:25:33.130 } 00:25:33.130 ] 00:25:33.130 }' 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:33.130 11:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:34.064 [2024-05-15 11:18:52.613822] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:34.064 "name": "Existed_Raid", 00:25:34.064 "aliases": [ 00:25:34.064 "49515520-35e8-4a01-9094-a7114c052778" 00:25:34.064 ], 00:25:34.064 "product_name": "Raid Volume", 00:25:34.064 "block_size": 512, 00:25:34.064 "num_blocks": 63488, 00:25:34.064 "uuid": "49515520-35e8-4a01-9094-a7114c052778", 00:25:34.064 "assigned_rate_limits": { 00:25:34.064 "rw_ios_per_sec": 0, 00:25:34.064 "rw_mbytes_per_sec": 0, 00:25:34.064 "r_mbytes_per_sec": 0, 00:25:34.064 "w_mbytes_per_sec": 0 00:25:34.064 }, 00:25:34.064 
"claimed": false, 00:25:34.064 "zoned": false, 00:25:34.064 "supported_io_types": { 00:25:34.064 "read": true, 00:25:34.064 "write": true, 00:25:34.064 "unmap": false, 00:25:34.064 "write_zeroes": true, 00:25:34.064 "flush": false, 00:25:34.064 "reset": true, 00:25:34.064 "compare": false, 00:25:34.064 "compare_and_write": false, 00:25:34.064 "abort": false, 00:25:34.064 "nvme_admin": false, 00:25:34.064 "nvme_io": false 00:25:34.064 }, 00:25:34.064 "memory_domains": [ 00:25:34.064 { 00:25:34.064 "dma_device_id": "system", 00:25:34.064 "dma_device_type": 1 00:25:34.064 }, 00:25:34.064 { 00:25:34.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.064 "dma_device_type": 2 00:25:34.064 }, 00:25:34.064 { 00:25:34.064 "dma_device_id": "system", 00:25:34.064 "dma_device_type": 1 00:25:34.064 }, 00:25:34.064 { 00:25:34.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.064 "dma_device_type": 2 00:25:34.064 } 00:25:34.064 ], 00:25:34.064 "driver_specific": { 00:25:34.064 "raid": { 00:25:34.064 "uuid": "49515520-35e8-4a01-9094-a7114c052778", 00:25:34.064 "strip_size_kb": 0, 00:25:34.064 "state": "online", 00:25:34.064 "raid_level": "raid1", 00:25:34.064 "superblock": true, 00:25:34.064 "num_base_bdevs": 2, 00:25:34.064 "num_base_bdevs_discovered": 2, 00:25:34.064 "num_base_bdevs_operational": 2, 00:25:34.064 "base_bdevs_list": [ 00:25:34.064 { 00:25:34.064 "name": "BaseBdev1", 00:25:34.064 "uuid": "58442fbc-a41a-4690-9f77-017531f155d6", 00:25:34.064 "is_configured": true, 00:25:34.064 "data_offset": 2048, 00:25:34.064 "data_size": 63488 00:25:34.064 }, 00:25:34.064 { 00:25:34.064 "name": "BaseBdev2", 00:25:34.064 "uuid": "8cf9bdff-5c05-4dca-8bc1-b8ab16ae0bae", 00:25:34.064 "is_configured": true, 00:25:34.064 "data_offset": 2048, 00:25:34.064 "data_size": 63488 00:25:34.064 } 00:25:34.064 ] 00:25:34.064 } 00:25:34.064 } 00:25:34.064 }' 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:25:34.064 BaseBdev2' 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:34.064 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:34.630 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:34.630 "name": "BaseBdev1", 00:25:34.630 "aliases": [ 00:25:34.630 "58442fbc-a41a-4690-9f77-017531f155d6" 00:25:34.630 ], 00:25:34.630 "product_name": "Malloc disk", 00:25:34.630 "block_size": 512, 00:25:34.630 "num_blocks": 65536, 00:25:34.630 "uuid": "58442fbc-a41a-4690-9f77-017531f155d6", 00:25:34.630 "assigned_rate_limits": { 00:25:34.630 "rw_ios_per_sec": 0, 00:25:34.630 "rw_mbytes_per_sec": 0, 00:25:34.630 "r_mbytes_per_sec": 0, 00:25:34.630 "w_mbytes_per_sec": 0 00:25:34.630 }, 00:25:34.630 "claimed": true, 00:25:34.630 "claim_type": "exclusive_write", 00:25:34.630 "zoned": false, 00:25:34.630 "supported_io_types": { 00:25:34.630 "read": true, 00:25:34.630 "write": true, 00:25:34.630 "unmap": true, 00:25:34.630 "write_zeroes": true, 00:25:34.630 "flush": true, 00:25:34.630 "reset": true, 00:25:34.630 "compare": false, 00:25:34.630 
"compare_and_write": false, 00:25:34.630 "abort": true, 00:25:34.630 "nvme_admin": false, 00:25:34.630 "nvme_io": false 00:25:34.630 }, 00:25:34.630 "memory_domains": [ 00:25:34.630 { 00:25:34.630 "dma_device_id": "system", 00:25:34.630 "dma_device_type": 1 00:25:34.630 }, 00:25:34.630 { 00:25:34.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.630 "dma_device_type": 2 00:25:34.630 } 00:25:34.630 ], 00:25:34.630 "driver_specific": {} 00:25:34.630 }' 00:25:34.630 11:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.630 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.630 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:34.630 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.630 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.630 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:34.630 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.887 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.887 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:34.887 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.887 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.887 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:34.887 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:34.887 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:34.887 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:35.145 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:35.145 "name": "BaseBdev2", 00:25:35.145 "aliases": [ 00:25:35.145 "8cf9bdff-5c05-4dca-8bc1-b8ab16ae0bae" 00:25:35.145 ], 00:25:35.145 "product_name": "Malloc disk", 00:25:35.145 "block_size": 512, 00:25:35.145 "num_blocks": 65536, 00:25:35.145 "uuid": "8cf9bdff-5c05-4dca-8bc1-b8ab16ae0bae", 00:25:35.145 "assigned_rate_limits": { 00:25:35.145 "rw_ios_per_sec": 0, 00:25:35.145 "rw_mbytes_per_sec": 0, 00:25:35.145 "r_mbytes_per_sec": 0, 00:25:35.145 "w_mbytes_per_sec": 0 00:25:35.145 }, 00:25:35.145 "claimed": true, 00:25:35.145 "claim_type": "exclusive_write", 00:25:35.145 "zoned": false, 00:25:35.145 "supported_io_types": { 00:25:35.145 "read": true, 00:25:35.145 "write": true, 00:25:35.145 "unmap": true, 00:25:35.145 "write_zeroes": true, 00:25:35.145 "flush": true, 00:25:35.145 "reset": true, 00:25:35.145 "compare": false, 00:25:35.145 "compare_and_write": false, 00:25:35.145 "abort": true, 00:25:35.145 "nvme_admin": false, 00:25:35.145 "nvme_io": false 00:25:35.145 }, 00:25:35.145 "memory_domains": [ 00:25:35.145 { 00:25:35.145 "dma_device_id": "system", 00:25:35.145 "dma_device_type": 1 00:25:35.145 }, 00:25:35.145 { 00:25:35.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.145 "dma_device_type": 2 00:25:35.145 } 00:25:35.145 ], 00:25:35.145 "driver_specific": {} 
00:25:35.145 }' 00:25:35.145 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:35.145 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:35.145 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:35.145 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:35.401 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:35.401 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:35.401 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:35.401 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:35.401 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:35.401 11:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:35.401 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:35.659 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:35.659 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:35.659 [2024-05-15 11:18:54.259978] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:35.917 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:25:35.917 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:25:35.917 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:25:35.917 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:25:35.917 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:25:35.917 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.918 11:18:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:36.176 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:36.176 "name": "Existed_Raid", 00:25:36.176 "uuid": "49515520-35e8-4a01-9094-a7114c052778", 00:25:36.176 "strip_size_kb": 0, 00:25:36.176 "state": "online", 00:25:36.176 "raid_level": "raid1", 00:25:36.176 "superblock": true, 00:25:36.176 "num_base_bdevs": 2, 00:25:36.176 "num_base_bdevs_discovered": 1, 00:25:36.176 "num_base_bdevs_operational": 1, 00:25:36.176 "base_bdevs_list": [ 00:25:36.176 { 00:25:36.176 "name": null, 00:25:36.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.176 "is_configured": false, 00:25:36.176 "data_offset": 2048, 00:25:36.176 "data_size": 63488 00:25:36.176 }, 00:25:36.176 { 00:25:36.176 "name": "BaseBdev2", 00:25:36.176 "uuid": "8cf9bdff-5c05-4dca-8bc1-b8ab16ae0bae", 00:25:36.176 "is_configured": true, 00:25:36.176 "data_offset": 2048, 00:25:36.176 "data_size": 63488 00:25:36.176 } 00:25:36.176 ] 00:25:36.176 }' 00:25:36.176 11:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:36.176 11:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.741 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:36.741 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:36.741 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.741 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:37.000 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:37.000 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:37.000 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:37.259 [2024-05-15 11:18:55.753405] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:37.259 [2024-05-15 11:18:55.753501] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.259 [2024-05-15 11:18:55.840575] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.259 [2024-05-15 11:18:55.840697] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.259 [2024-05-15 11:18:55.840713] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:25:37.259 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:37.259 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:37.259 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.259 11:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:25:37.517 11:18:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 55808 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 55808 ']' 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 55808 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55808 00:25:37.517 killing process with pid 55808 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55808' 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 55808 00:25:37.517 11:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 55808 00:25:37.517 [2024-05-15 11:18:56.089045] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:37.517 [2024-05-15 11:18:56.089154] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:38.892 ************************************ 00:25:38.892 END TEST raid_state_function_test_sb 00:25:38.892 ************************************ 00:25:38.892 11:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:25:38.892 00:25:38.892 real 0m11.987s 00:25:38.892 user 0m21.114s 00:25:38.892 sys 0m1.294s 00:25:38.892 11:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:38.892 11:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:38.892 11:18:57 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:25:38.892 11:18:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:38.892 11:18:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:38.892 11:18:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:38.892 ************************************ 00:25:38.892 START TEST raid_superblock_test 00:25:38.892 ************************************ 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:38.892 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:38.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=56190 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 56190 /var/tmp/spdk-raid.sock 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 56190 ']' 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:38.893 11:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.150 [2024-05-15 11:18:57.531754] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
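Unlike the state-function tests above, the raid_superblock_test starting here builds its array on top of passthru bdevs with fixed UUIDs and passes -s to bdev_raid_create, so an actual RAID superblock is written. A rough sketch of that setup, using only commands that appear in the trace that follows (names and paths as used by the test; illustrative, not the literal script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2; do
  # Each malloc bdev is wrapped in a passthru bdev (pt1/pt2) carrying a fixed, predictable UUID.
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc$i
  $rpc -s $sock bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
# -s requests superblock creation; raid_bdev1 then comes up online with both pt bdevs configured.
$rpc -s $sock bdev_raid_create -s -r raid1 -b 'pt1 pt2' -n raid_bdev1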
00:25:39.150 [2024-05-15 11:18:57.531988] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56190 ] 00:25:39.150 [2024-05-15 11:18:57.703194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.408 [2024-05-15 11:18:58.001600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.665 [2024-05-15 11:18:58.202575] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:39.923 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:40.181 malloc1 00:25:40.181 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:40.181 [2024-05-15 11:18:58.811048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:40.181 [2024-05-15 11:18:58.811161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.181 [2024-05-15 11:18:58.811221] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:25:40.181 [2024-05-15 11:18:58.811266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.181 [2024-05-15 11:18:58.813849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.181 [2024-05-15 11:18:58.813901] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:40.181 pt1 00:25:40.440 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:40.440 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:40.440 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:40.440 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:40.440 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:40.440 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:25:40.440 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:40.440 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:40.440 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:40.698 malloc2 00:25:40.698 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:40.698 [2024-05-15 11:18:59.321751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:40.698 [2024-05-15 11:18:59.322217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.698 [2024-05-15 11:18:59.322281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:25:40.698 [2024-05-15 11:18:59.322337] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.698 pt2 00:25:40.698 [2024-05-15 11:18:59.324219] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.698 [2024-05-15 11:18:59.324281] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:25:40.956 [2024-05-15 11:18:59.513854] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:40.956 [2024-05-15 11:18:59.515624] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:40.956 [2024-05-15 11:18:59.515993] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:25:40.956 [2024-05-15 11:18:59.516015] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:40.956 [2024-05-15 11:18:59.516159] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:25:40.956 [2024-05-15 11:18:59.516442] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:25:40.956 [2024-05-15 11:18:59.516458] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:25:40.956 [2024-05-15 11:18:59.516572] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.956 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.214 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:41.214 "name": "raid_bdev1", 00:25:41.214 "uuid": "a5507e07-eb47-425e-b862-1a601d0bde94", 00:25:41.214 "strip_size_kb": 0, 00:25:41.214 "state": "online", 00:25:41.214 "raid_level": "raid1", 00:25:41.214 "superblock": true, 00:25:41.214 "num_base_bdevs": 2, 00:25:41.214 "num_base_bdevs_discovered": 2, 00:25:41.214 "num_base_bdevs_operational": 2, 00:25:41.214 "base_bdevs_list": [ 00:25:41.214 { 00:25:41.214 "name": "pt1", 00:25:41.214 "uuid": "7bdc0e42-33aa-5fc2-849a-9f38912c5c71", 00:25:41.214 "is_configured": true, 00:25:41.214 "data_offset": 2048, 00:25:41.214 "data_size": 63488 00:25:41.214 }, 00:25:41.214 { 00:25:41.214 "name": "pt2", 00:25:41.214 "uuid": "b08fb239-8d92-5d47-aa68-8ca360223f25", 00:25:41.214 "is_configured": true, 00:25:41.214 "data_offset": 2048, 00:25:41.214 "data_size": 63488 00:25:41.214 } 00:25:41.214 ] 00:25:41.214 }' 00:25:41.214 11:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:41.214 11:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.781 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:41.781 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:25:41.781 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:41.781 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:41.781 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:41.781 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:25:41.781 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:41.781 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:42.040 [2024-05-15 11:19:00.498186] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:42.040 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:42.040 "name": "raid_bdev1", 00:25:42.040 "aliases": [ 00:25:42.040 "a5507e07-eb47-425e-b862-1a601d0bde94" 00:25:42.040 ], 00:25:42.040 "product_name": "Raid Volume", 00:25:42.040 "block_size": 512, 00:25:42.040 "num_blocks": 63488, 00:25:42.040 "uuid": "a5507e07-eb47-425e-b862-1a601d0bde94", 00:25:42.040 "assigned_rate_limits": { 00:25:42.040 "rw_ios_per_sec": 0, 00:25:42.040 "rw_mbytes_per_sec": 0, 00:25:42.040 "r_mbytes_per_sec": 0, 00:25:42.040 "w_mbytes_per_sec": 0 00:25:42.040 }, 
00:25:42.040 "claimed": false, 00:25:42.040 "zoned": false, 00:25:42.040 "supported_io_types": { 00:25:42.040 "read": true, 00:25:42.040 "write": true, 00:25:42.040 "unmap": false, 00:25:42.040 "write_zeroes": true, 00:25:42.040 "flush": false, 00:25:42.040 "reset": true, 00:25:42.040 "compare": false, 00:25:42.040 "compare_and_write": false, 00:25:42.040 "abort": false, 00:25:42.040 "nvme_admin": false, 00:25:42.040 "nvme_io": false 00:25:42.040 }, 00:25:42.040 "memory_domains": [ 00:25:42.040 { 00:25:42.040 "dma_device_id": "system", 00:25:42.040 "dma_device_type": 1 00:25:42.040 }, 00:25:42.040 { 00:25:42.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.040 "dma_device_type": 2 00:25:42.040 }, 00:25:42.040 { 00:25:42.040 "dma_device_id": "system", 00:25:42.040 "dma_device_type": 1 00:25:42.040 }, 00:25:42.040 { 00:25:42.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.040 "dma_device_type": 2 00:25:42.040 } 00:25:42.040 ], 00:25:42.040 "driver_specific": { 00:25:42.040 "raid": { 00:25:42.040 "uuid": "a5507e07-eb47-425e-b862-1a601d0bde94", 00:25:42.040 "strip_size_kb": 0, 00:25:42.040 "state": "online", 00:25:42.040 "raid_level": "raid1", 00:25:42.040 "superblock": true, 00:25:42.040 "num_base_bdevs": 2, 00:25:42.040 "num_base_bdevs_discovered": 2, 00:25:42.040 "num_base_bdevs_operational": 2, 00:25:42.040 "base_bdevs_list": [ 00:25:42.040 { 00:25:42.040 "name": "pt1", 00:25:42.040 "uuid": "7bdc0e42-33aa-5fc2-849a-9f38912c5c71", 00:25:42.040 "is_configured": true, 00:25:42.040 "data_offset": 2048, 00:25:42.040 "data_size": 63488 00:25:42.040 }, 00:25:42.040 { 00:25:42.040 "name": "pt2", 00:25:42.040 "uuid": "b08fb239-8d92-5d47-aa68-8ca360223f25", 00:25:42.040 "is_configured": true, 00:25:42.040 "data_offset": 2048, 00:25:42.040 "data_size": 63488 00:25:42.040 } 00:25:42.040 ] 00:25:42.040 } 00:25:42.040 } 00:25:42.040 }' 00:25:42.040 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:42.040 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:25:42.040 pt2' 00:25:42.040 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:42.040 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:42.040 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:42.298 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:42.298 "name": "pt1", 00:25:42.298 "aliases": [ 00:25:42.298 "7bdc0e42-33aa-5fc2-849a-9f38912c5c71" 00:25:42.298 ], 00:25:42.298 "product_name": "passthru", 00:25:42.298 "block_size": 512, 00:25:42.298 "num_blocks": 65536, 00:25:42.298 "uuid": "7bdc0e42-33aa-5fc2-849a-9f38912c5c71", 00:25:42.298 "assigned_rate_limits": { 00:25:42.298 "rw_ios_per_sec": 0, 00:25:42.298 "rw_mbytes_per_sec": 0, 00:25:42.298 "r_mbytes_per_sec": 0, 00:25:42.298 "w_mbytes_per_sec": 0 00:25:42.298 }, 00:25:42.298 "claimed": true, 00:25:42.298 "claim_type": "exclusive_write", 00:25:42.298 "zoned": false, 00:25:42.298 "supported_io_types": { 00:25:42.298 "read": true, 00:25:42.298 "write": true, 00:25:42.298 "unmap": true, 00:25:42.298 "write_zeroes": true, 00:25:42.298 "flush": true, 00:25:42.298 "reset": true, 00:25:42.298 "compare": false, 00:25:42.298 "compare_and_write": false, 00:25:42.298 "abort": true, 00:25:42.298 
"nvme_admin": false, 00:25:42.298 "nvme_io": false 00:25:42.298 }, 00:25:42.298 "memory_domains": [ 00:25:42.298 { 00:25:42.298 "dma_device_id": "system", 00:25:42.298 "dma_device_type": 1 00:25:42.298 }, 00:25:42.298 { 00:25:42.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.298 "dma_device_type": 2 00:25:42.298 } 00:25:42.298 ], 00:25:42.298 "driver_specific": { 00:25:42.298 "passthru": { 00:25:42.298 "name": "pt1", 00:25:42.298 "base_bdev_name": "malloc1" 00:25:42.298 } 00:25:42.298 } 00:25:42.298 }' 00:25:42.298 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:42.298 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:42.298 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:42.298 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:42.560 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:42.560 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:42.560 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:42.560 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:42.560 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:42.560 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:42.560 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:42.560 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:42.560 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:42.560 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:42.560 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:42.817 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:42.817 "name": "pt2", 00:25:42.817 "aliases": [ 00:25:42.817 "b08fb239-8d92-5d47-aa68-8ca360223f25" 00:25:42.817 ], 00:25:42.817 "product_name": "passthru", 00:25:42.817 "block_size": 512, 00:25:42.817 "num_blocks": 65536, 00:25:42.817 "uuid": "b08fb239-8d92-5d47-aa68-8ca360223f25", 00:25:42.817 "assigned_rate_limits": { 00:25:42.817 "rw_ios_per_sec": 0, 00:25:42.817 "rw_mbytes_per_sec": 0, 00:25:42.817 "r_mbytes_per_sec": 0, 00:25:42.817 "w_mbytes_per_sec": 0 00:25:42.817 }, 00:25:42.818 "claimed": true, 00:25:42.818 "claim_type": "exclusive_write", 00:25:42.818 "zoned": false, 00:25:42.818 "supported_io_types": { 00:25:42.818 "read": true, 00:25:42.818 "write": true, 00:25:42.818 "unmap": true, 00:25:42.818 "write_zeroes": true, 00:25:42.818 "flush": true, 00:25:42.818 "reset": true, 00:25:42.818 "compare": false, 00:25:42.818 "compare_and_write": false, 00:25:42.818 "abort": true, 00:25:42.818 "nvme_admin": false, 00:25:42.818 "nvme_io": false 00:25:42.818 }, 00:25:42.818 "memory_domains": [ 00:25:42.818 { 00:25:42.818 "dma_device_id": "system", 00:25:42.818 "dma_device_type": 1 00:25:42.818 }, 00:25:42.818 { 00:25:42.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.818 "dma_device_type": 2 00:25:42.818 } 00:25:42.818 ], 00:25:42.818 "driver_specific": { 00:25:42.818 "passthru": { 00:25:42.818 "name": "pt2", 00:25:42.818 
"base_bdev_name": "malloc2" 00:25:42.818 } 00:25:42.818 } 00:25:42.818 }' 00:25:42.818 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:43.076 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:43.076 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:43.076 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:43.076 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:43.076 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:43.076 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:43.334 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:43.334 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:43.334 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:43.334 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:43.334 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:43.334 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:43.334 11:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:43.592 [2024-05-15 11:19:02.144509] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:43.592 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a5507e07-eb47-425e-b862-1a601d0bde94 00:25:43.592 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a5507e07-eb47-425e-b862-1a601d0bde94 ']' 00:25:43.592 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:43.850 [2024-05-15 11:19:02.384392] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:43.850 [2024-05-15 11:19:02.384450] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:43.850 [2024-05-15 11:19:02.384528] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:43.850 [2024-05-15 11:19:02.384578] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:43.850 [2024-05-15 11:19:02.384591] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:25:43.850 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:43.850 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.108 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:44.108 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:44.108 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:44.108 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:25:44.366 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:44.366 11:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:44.624 11:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:44.624 11:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:44.881 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:45.138 [2024-05-15 11:19:03.604570] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:45.139 [2024-05-15 11:19:03.606163] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:45.139 [2024-05-15 11:19:03.606224] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:45.139 [2024-05-15 11:19:03.606291] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:45.139 [2024-05-15 11:19:03.606331] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:45.139 [2024-05-15 11:19:03.606345] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:25:45.139 request: 00:25:45.139 { 00:25:45.139 "name": "raid_bdev1", 00:25:45.139 "raid_level": "raid1", 00:25:45.139 "base_bdevs": [ 
00:25:45.139 "malloc1", 00:25:45.139 "malloc2" 00:25:45.139 ], 00:25:45.139 "superblock": false, 00:25:45.139 "method": "bdev_raid_create", 00:25:45.139 "req_id": 1 00:25:45.139 } 00:25:45.139 Got JSON-RPC error response 00:25:45.139 response: 00:25:45.139 { 00:25:45.139 "code": -17, 00:25:45.139 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:45.139 } 00:25:45.139 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:25:45.139 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:45.139 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:45.139 11:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:45.139 11:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:45.139 11:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.486 11:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:45.486 11:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:45.486 11:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:45.486 [2024-05-15 11:19:04.080624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:45.486 [2024-05-15 11:19:04.080733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.486 [2024-05-15 11:19:04.080783] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:25:45.486 [2024-05-15 11:19:04.080989] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.486 [2024-05-15 11:19:04.082702] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.486 [2024-05-15 11:19:04.082751] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:45.486 [2024-05-15 11:19:04.082862] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:45.486 [2024-05-15 11:19:04.082926] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:45.486 pt1 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.486 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.747 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:45.747 "name": "raid_bdev1", 00:25:45.747 "uuid": "a5507e07-eb47-425e-b862-1a601d0bde94", 00:25:45.747 "strip_size_kb": 0, 00:25:45.747 "state": "configuring", 00:25:45.747 "raid_level": "raid1", 00:25:45.747 "superblock": true, 00:25:45.747 "num_base_bdevs": 2, 00:25:45.747 "num_base_bdevs_discovered": 1, 00:25:45.747 "num_base_bdevs_operational": 2, 00:25:45.747 "base_bdevs_list": [ 00:25:45.747 { 00:25:45.747 "name": "pt1", 00:25:45.747 "uuid": "7bdc0e42-33aa-5fc2-849a-9f38912c5c71", 00:25:45.747 "is_configured": true, 00:25:45.747 "data_offset": 2048, 00:25:45.747 "data_size": 63488 00:25:45.747 }, 00:25:45.747 { 00:25:45.747 "name": null, 00:25:45.747 "uuid": "b08fb239-8d92-5d47-aa68-8ca360223f25", 00:25:45.747 "is_configured": false, 00:25:45.747 "data_offset": 2048, 00:25:45.747 "data_size": 63488 00:25:45.747 } 00:25:45.747 ] 00:25:45.747 }' 00:25:45.747 11:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:45.747 11:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:46.684 [2024-05-15 11:19:05.228831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:46.684 [2024-05-15 11:19:05.228946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.684 [2024-05-15 11:19:05.229007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:25:46.684 [2024-05-15 11:19:05.229049] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.684 [2024-05-15 11:19:05.229416] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.684 [2024-05-15 11:19:05.229459] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:46.684 [2024-05-15 11:19:05.229546] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:46.684 [2024-05-15 11:19:05.229572] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:46.684 [2024-05-15 11:19:05.229664] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:25:46.684 [2024-05-15 11:19:05.229678] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:46.684 [2024-05-15 11:19:05.229762] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:25:46.684 [2024-05-15 11:19:05.230146] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:25:46.684 [2024-05-15 
11:19:05.230168] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:25:46.684 [2024-05-15 11:19:05.230270] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.684 pt2 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.684 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.943 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:46.943 "name": "raid_bdev1", 00:25:46.943 "uuid": "a5507e07-eb47-425e-b862-1a601d0bde94", 00:25:46.943 "strip_size_kb": 0, 00:25:46.943 "state": "online", 00:25:46.943 "raid_level": "raid1", 00:25:46.943 "superblock": true, 00:25:46.943 "num_base_bdevs": 2, 00:25:46.943 "num_base_bdevs_discovered": 2, 00:25:46.943 "num_base_bdevs_operational": 2, 00:25:46.943 "base_bdevs_list": [ 00:25:46.943 { 00:25:46.943 "name": "pt1", 00:25:46.943 "uuid": "7bdc0e42-33aa-5fc2-849a-9f38912c5c71", 00:25:46.943 "is_configured": true, 00:25:46.943 "data_offset": 2048, 00:25:46.943 "data_size": 63488 00:25:46.943 }, 00:25:46.943 { 00:25:46.943 "name": "pt2", 00:25:46.943 "uuid": "b08fb239-8d92-5d47-aa68-8ca360223f25", 00:25:46.943 "is_configured": true, 00:25:46.943 "data_offset": 2048, 00:25:46.943 "data_size": 63488 00:25:46.943 } 00:25:46.943 ] 00:25:46.943 }' 00:25:46.943 11:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:46.943 11:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.509 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:47.509 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:25:47.509 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:47.509 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:47.509 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 
-- # local base_bdev_names 00:25:47.509 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:25:47.509 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:47.509 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:47.767 [2024-05-15 11:19:06.365138] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:47.767 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:47.767 "name": "raid_bdev1", 00:25:47.767 "aliases": [ 00:25:47.767 "a5507e07-eb47-425e-b862-1a601d0bde94" 00:25:47.767 ], 00:25:47.767 "product_name": "Raid Volume", 00:25:47.767 "block_size": 512, 00:25:47.767 "num_blocks": 63488, 00:25:47.767 "uuid": "a5507e07-eb47-425e-b862-1a601d0bde94", 00:25:47.767 "assigned_rate_limits": { 00:25:47.767 "rw_ios_per_sec": 0, 00:25:47.767 "rw_mbytes_per_sec": 0, 00:25:47.767 "r_mbytes_per_sec": 0, 00:25:47.767 "w_mbytes_per_sec": 0 00:25:47.767 }, 00:25:47.767 "claimed": false, 00:25:47.767 "zoned": false, 00:25:47.767 "supported_io_types": { 00:25:47.767 "read": true, 00:25:47.767 "write": true, 00:25:47.767 "unmap": false, 00:25:47.767 "write_zeroes": true, 00:25:47.767 "flush": false, 00:25:47.767 "reset": true, 00:25:47.767 "compare": false, 00:25:47.767 "compare_and_write": false, 00:25:47.767 "abort": false, 00:25:47.767 "nvme_admin": false, 00:25:47.767 "nvme_io": false 00:25:47.767 }, 00:25:47.767 "memory_domains": [ 00:25:47.767 { 00:25:47.767 "dma_device_id": "system", 00:25:47.767 "dma_device_type": 1 00:25:47.767 }, 00:25:47.768 { 00:25:47.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.768 "dma_device_type": 2 00:25:47.768 }, 00:25:47.768 { 00:25:47.768 "dma_device_id": "system", 00:25:47.768 "dma_device_type": 1 00:25:47.768 }, 00:25:47.768 { 00:25:47.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.768 "dma_device_type": 2 00:25:47.768 } 00:25:47.768 ], 00:25:47.768 "driver_specific": { 00:25:47.768 "raid": { 00:25:47.768 "uuid": "a5507e07-eb47-425e-b862-1a601d0bde94", 00:25:47.768 "strip_size_kb": 0, 00:25:47.768 "state": "online", 00:25:47.768 "raid_level": "raid1", 00:25:47.768 "superblock": true, 00:25:47.768 "num_base_bdevs": 2, 00:25:47.768 "num_base_bdevs_discovered": 2, 00:25:47.768 "num_base_bdevs_operational": 2, 00:25:47.768 "base_bdevs_list": [ 00:25:47.768 { 00:25:47.768 "name": "pt1", 00:25:47.768 "uuid": "7bdc0e42-33aa-5fc2-849a-9f38912c5c71", 00:25:47.768 "is_configured": true, 00:25:47.768 "data_offset": 2048, 00:25:47.768 "data_size": 63488 00:25:47.768 }, 00:25:47.768 { 00:25:47.768 "name": "pt2", 00:25:47.768 "uuid": "b08fb239-8d92-5d47-aa68-8ca360223f25", 00:25:47.768 "is_configured": true, 00:25:47.768 "data_offset": 2048, 00:25:47.768 "data_size": 63488 00:25:47.768 } 00:25:47.768 ] 00:25:47.768 } 00:25:47.768 } 00:25:47.768 }' 00:25:47.768 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:48.026 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:25:48.026 pt2' 00:25:48.026 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:48.026 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b pt1 00:25:48.026 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:48.026 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:48.027 "name": "pt1", 00:25:48.027 "aliases": [ 00:25:48.027 "7bdc0e42-33aa-5fc2-849a-9f38912c5c71" 00:25:48.027 ], 00:25:48.027 "product_name": "passthru", 00:25:48.027 "block_size": 512, 00:25:48.027 "num_blocks": 65536, 00:25:48.027 "uuid": "7bdc0e42-33aa-5fc2-849a-9f38912c5c71", 00:25:48.027 "assigned_rate_limits": { 00:25:48.027 "rw_ios_per_sec": 0, 00:25:48.027 "rw_mbytes_per_sec": 0, 00:25:48.027 "r_mbytes_per_sec": 0, 00:25:48.027 "w_mbytes_per_sec": 0 00:25:48.027 }, 00:25:48.027 "claimed": true, 00:25:48.027 "claim_type": "exclusive_write", 00:25:48.027 "zoned": false, 00:25:48.027 "supported_io_types": { 00:25:48.027 "read": true, 00:25:48.027 "write": true, 00:25:48.027 "unmap": true, 00:25:48.027 "write_zeroes": true, 00:25:48.027 "flush": true, 00:25:48.027 "reset": true, 00:25:48.027 "compare": false, 00:25:48.027 "compare_and_write": false, 00:25:48.027 "abort": true, 00:25:48.027 "nvme_admin": false, 00:25:48.027 "nvme_io": false 00:25:48.027 }, 00:25:48.027 "memory_domains": [ 00:25:48.027 { 00:25:48.027 "dma_device_id": "system", 00:25:48.027 "dma_device_type": 1 00:25:48.027 }, 00:25:48.027 { 00:25:48.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.027 "dma_device_type": 2 00:25:48.027 } 00:25:48.027 ], 00:25:48.027 "driver_specific": { 00:25:48.027 "passthru": { 00:25:48.027 "name": "pt1", 00:25:48.027 "base_bdev_name": "malloc1" 00:25:48.027 } 00:25:48.027 } 00:25:48.027 }' 00:25:48.027 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:48.285 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:48.285 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:48.285 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:48.285 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:48.285 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:48.285 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:48.547 11:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:48.547 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:48.547 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:48.547 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:48.547 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:48.547 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:48.547 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:48.547 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:48.820 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:48.820 "name": "pt2", 00:25:48.820 "aliases": [ 00:25:48.820 "b08fb239-8d92-5d47-aa68-8ca360223f25" 00:25:48.820 ], 00:25:48.820 "product_name": "passthru", 00:25:48.820 "block_size": 512, 00:25:48.820 "num_blocks": 65536, 00:25:48.820 "uuid": 
"b08fb239-8d92-5d47-aa68-8ca360223f25", 00:25:48.820 "assigned_rate_limits": { 00:25:48.820 "rw_ios_per_sec": 0, 00:25:48.821 "rw_mbytes_per_sec": 0, 00:25:48.821 "r_mbytes_per_sec": 0, 00:25:48.821 "w_mbytes_per_sec": 0 00:25:48.821 }, 00:25:48.821 "claimed": true, 00:25:48.821 "claim_type": "exclusive_write", 00:25:48.821 "zoned": false, 00:25:48.821 "supported_io_types": { 00:25:48.821 "read": true, 00:25:48.821 "write": true, 00:25:48.821 "unmap": true, 00:25:48.821 "write_zeroes": true, 00:25:48.821 "flush": true, 00:25:48.821 "reset": true, 00:25:48.821 "compare": false, 00:25:48.821 "compare_and_write": false, 00:25:48.821 "abort": true, 00:25:48.821 "nvme_admin": false, 00:25:48.821 "nvme_io": false 00:25:48.821 }, 00:25:48.821 "memory_domains": [ 00:25:48.821 { 00:25:48.821 "dma_device_id": "system", 00:25:48.821 "dma_device_type": 1 00:25:48.821 }, 00:25:48.821 { 00:25:48.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.821 "dma_device_type": 2 00:25:48.821 } 00:25:48.821 ], 00:25:48.821 "driver_specific": { 00:25:48.821 "passthru": { 00:25:48.821 "name": "pt2", 00:25:48.821 "base_bdev_name": "malloc2" 00:25:48.821 } 00:25:48.821 } 00:25:48.821 }' 00:25:48.821 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:48.821 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:49.078 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:49.078 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:49.078 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:49.078 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.078 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:49.078 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:49.336 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.336 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:49.336 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:49.336 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:49.336 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:49.336 11:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:49.594 [2024-05-15 11:19:08.037420] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:49.594 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a5507e07-eb47-425e-b862-1a601d0bde94 '!=' a5507e07-eb47-425e-b862-1a601d0bde94 ']' 00:25:49.594 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:49.594 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:25:49.594 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:25:49.594 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:49.594 [2024-05-15 11:19:08.225336] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:49.853 11:19:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.853 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.112 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:50.112 "name": "raid_bdev1", 00:25:50.112 "uuid": "a5507e07-eb47-425e-b862-1a601d0bde94", 00:25:50.112 "strip_size_kb": 0, 00:25:50.112 "state": "online", 00:25:50.112 "raid_level": "raid1", 00:25:50.112 "superblock": true, 00:25:50.112 "num_base_bdevs": 2, 00:25:50.112 "num_base_bdevs_discovered": 1, 00:25:50.112 "num_base_bdevs_operational": 1, 00:25:50.112 "base_bdevs_list": [ 00:25:50.112 { 00:25:50.112 "name": null, 00:25:50.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.112 "is_configured": false, 00:25:50.112 "data_offset": 2048, 00:25:50.112 "data_size": 63488 00:25:50.112 }, 00:25:50.112 { 00:25:50.112 "name": "pt2", 00:25:50.112 "uuid": "b08fb239-8d92-5d47-aa68-8ca360223f25", 00:25:50.112 "is_configured": true, 00:25:50.112 "data_offset": 2048, 00:25:50.112 "data_size": 63488 00:25:50.112 } 00:25:50.112 ] 00:25:50.112 }' 00:25:50.112 11:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:50.112 11:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:50.677 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:50.935 [2024-05-15 11:19:09.457462] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:50.935 [2024-05-15 11:19:09.457513] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:50.935 [2024-05-15 11:19:09.457585] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:50.935 [2024-05-15 11:19:09.457624] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:50.935 [2024-05-15 11:19:09.457636] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:25:50.935 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:50.935 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:51.193 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:51.193 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:51.193 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:51.193 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:51.193 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:51.451 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:51.451 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:51.451 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:51.451 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:51.451 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:25:51.451 11:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:51.710 [2024-05-15 11:19:10.157564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:51.710 [2024-05-15 11:19:10.157718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.710 [2024-05-15 11:19:10.157774] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:25:51.710 [2024-05-15 11:19:10.158051] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.710 [2024-05-15 11:19:10.160144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.710 [2024-05-15 11:19:10.160210] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:51.710 [2024-05-15 11:19:10.160334] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:51.710 [2024-05-15 11:19:10.160394] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:51.710 [2024-05-15 11:19:10.160489] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:25:51.710 [2024-05-15 11:19:10.160506] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:51.710 [2024-05-15 11:19:10.160617] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:25:51.710 [2024-05-15 11:19:10.160895] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:25:51.710 [2024-05-15 11:19:10.160915] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:25:51.710 [2024-05-15 11:19:10.161035] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.710 pt2 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.710 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.975 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:51.975 "name": "raid_bdev1", 00:25:51.975 "uuid": "a5507e07-eb47-425e-b862-1a601d0bde94", 00:25:51.975 "strip_size_kb": 0, 00:25:51.975 "state": "online", 00:25:51.975 "raid_level": "raid1", 00:25:51.975 "superblock": true, 00:25:51.975 "num_base_bdevs": 2, 00:25:51.975 "num_base_bdevs_discovered": 1, 00:25:51.975 "num_base_bdevs_operational": 1, 00:25:51.975 "base_bdevs_list": [ 00:25:51.975 { 00:25:51.975 "name": null, 00:25:51.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.975 "is_configured": false, 00:25:51.975 "data_offset": 2048, 00:25:51.975 "data_size": 63488 00:25:51.975 }, 00:25:51.975 { 00:25:51.975 "name": "pt2", 00:25:51.975 "uuid": "b08fb239-8d92-5d47-aa68-8ca360223f25", 00:25:51.975 "is_configured": true, 00:25:51.975 "data_offset": 2048, 00:25:51.975 "data_size": 63488 00:25:51.975 } 00:25:51.975 ] 00:25:51.975 }' 00:25:51.975 11:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:51.975 11:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.541 11:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:25:52.541 11:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:52.541 11:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:25:52.799 [2024-05-15 11:19:11.377859] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' a5507e07-eb47-425e-b862-1a601d0bde94 '!=' a5507e07-eb47-425e-b862-1a601d0bde94 ']' 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 56190 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 56190 ']' 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 56190 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 56190 00:25:52.799 killing 
process with pid 56190 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 56190' 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 56190 00:25:52.799 11:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 56190 00:25:52.799 [2024-05-15 11:19:11.425638] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:52.799 [2024-05-15 11:19:11.425726] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:52.799 [2024-05-15 11:19:11.425767] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:52.799 [2024-05-15 11:19:11.425778] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:25:53.057 [2024-05-15 11:19:11.627496] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:54.432 ************************************ 00:25:54.432 END TEST raid_superblock_test 00:25:54.432 ************************************ 00:25:54.432 11:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:25:54.432 00:25:54.432 real 0m15.485s 00:25:54.432 user 0m28.278s 00:25:54.432 sys 0m1.583s 00:25:54.432 11:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:54.432 11:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.432 11:19:12 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:25:54.432 11:19:12 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:25:54.432 11:19:12 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:25:54.432 11:19:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:25:54.432 11:19:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:54.432 11:19:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:54.432 ************************************ 00:25:54.432 START TEST raid_state_function_test 00:25:54.432 ************************************ 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:25:54.432 11:19:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:25:54.432 Process raid pid: 56683 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=56683 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 56683' 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 56683 /var/tmp/spdk-raid.sock 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 56683 ']' 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:54.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:54.432 11:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.690 [2024-05-15 11:19:13.068146] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
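(The raid_state_function_test entries that follow drive a raid0 array whose base bdevs do not exist yet; as the log below shows, the array can be declared up front and then sits in the "configuring" state with zero base bdevs discovered until its members appear. A minimal illustrative sketch of that first step, not captured output, using the socket and names from this run:)
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Declare a raid0 bdev with a 64 KiB strip over three base bdevs that have not been created yet.
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# The array reports state "configuring" and num_base_bdevs_discovered: 0.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'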
00:25:54.690 [2024-05-15 11:19:13.068347] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.690 [2024-05-15 11:19:13.229532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.948 [2024-05-15 11:19:13.484554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.206 [2024-05-15 11:19:13.692510] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:55.464 11:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:55.464 11:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:25:55.464 11:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:55.722 [2024-05-15 11:19:14.107289] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:55.722 [2024-05-15 11:19:14.107395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:55.722 [2024-05-15 11:19:14.107415] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:55.722 [2024-05-15 11:19:14.107439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:55.722 [2024-05-15 11:19:14.107450] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:55.722 [2024-05-15 11:19:14.107504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.722 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.980 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:55.980 "name": "Existed_Raid", 00:25:55.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.981 "strip_size_kb": 64, 
00:25:55.981 "state": "configuring", 00:25:55.981 "raid_level": "raid0", 00:25:55.981 "superblock": false, 00:25:55.981 "num_base_bdevs": 3, 00:25:55.981 "num_base_bdevs_discovered": 0, 00:25:55.981 "num_base_bdevs_operational": 3, 00:25:55.981 "base_bdevs_list": [ 00:25:55.981 { 00:25:55.981 "name": "BaseBdev1", 00:25:55.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.981 "is_configured": false, 00:25:55.981 "data_offset": 0, 00:25:55.981 "data_size": 0 00:25:55.981 }, 00:25:55.981 { 00:25:55.981 "name": "BaseBdev2", 00:25:55.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.981 "is_configured": false, 00:25:55.981 "data_offset": 0, 00:25:55.981 "data_size": 0 00:25:55.981 }, 00:25:55.981 { 00:25:55.981 "name": "BaseBdev3", 00:25:55.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.981 "is_configured": false, 00:25:55.981 "data_offset": 0, 00:25:55.981 "data_size": 0 00:25:55.981 } 00:25:55.981 ] 00:25:55.981 }' 00:25:55.981 11:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:55.981 11:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.546 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:56.805 [2024-05-15 11:19:15.307332] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:56.805 [2024-05-15 11:19:15.307398] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:25:56.805 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:57.063 [2024-05-15 11:19:15.511391] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:57.063 [2024-05-15 11:19:15.511477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:57.063 [2024-05-15 11:19:15.511493] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:57.063 [2024-05-15 11:19:15.511520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:57.063 [2024-05-15 11:19:15.511531] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:57.063 [2024-05-15 11:19:15.511561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:57.063 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:57.321 [2024-05-15 11:19:15.754484] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:57.321 BaseBdev1 00:25:57.321 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:25:57.321 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:57.321 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:57.321 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:57.321 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:25:57.321 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:57.321 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:57.580 [ 00:25:57.580 { 00:25:57.580 "name": "BaseBdev1", 00:25:57.580 "aliases": [ 00:25:57.580 "d420b290-645e-412e-89e2-035a0955caf3" 00:25:57.580 ], 00:25:57.580 "product_name": "Malloc disk", 00:25:57.580 "block_size": 512, 00:25:57.580 "num_blocks": 65536, 00:25:57.580 "uuid": "d420b290-645e-412e-89e2-035a0955caf3", 00:25:57.580 "assigned_rate_limits": { 00:25:57.580 "rw_ios_per_sec": 0, 00:25:57.580 "rw_mbytes_per_sec": 0, 00:25:57.580 "r_mbytes_per_sec": 0, 00:25:57.580 "w_mbytes_per_sec": 0 00:25:57.580 }, 00:25:57.580 "claimed": true, 00:25:57.580 "claim_type": "exclusive_write", 00:25:57.580 "zoned": false, 00:25:57.580 "supported_io_types": { 00:25:57.580 "read": true, 00:25:57.580 "write": true, 00:25:57.580 "unmap": true, 00:25:57.580 "write_zeroes": true, 00:25:57.580 "flush": true, 00:25:57.580 "reset": true, 00:25:57.580 "compare": false, 00:25:57.580 "compare_and_write": false, 00:25:57.580 "abort": true, 00:25:57.580 "nvme_admin": false, 00:25:57.580 "nvme_io": false 00:25:57.580 }, 00:25:57.580 "memory_domains": [ 00:25:57.580 { 00:25:57.580 "dma_device_id": "system", 00:25:57.580 "dma_device_type": 1 00:25:57.580 }, 00:25:57.580 { 00:25:57.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.580 "dma_device_type": 2 00:25:57.580 } 00:25:57.580 ], 00:25:57.580 "driver_specific": {} 00:25:57.580 } 00:25:57.580 ] 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.580 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.838 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:25:57.838 "name": "Existed_Raid", 00:25:57.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.838 "strip_size_kb": 64, 00:25:57.838 "state": "configuring", 00:25:57.838 "raid_level": "raid0", 00:25:57.838 "superblock": false, 00:25:57.838 "num_base_bdevs": 3, 00:25:57.838 "num_base_bdevs_discovered": 1, 00:25:57.838 "num_base_bdevs_operational": 3, 00:25:57.838 "base_bdevs_list": [ 00:25:57.838 { 00:25:57.838 "name": "BaseBdev1", 00:25:57.838 "uuid": "d420b290-645e-412e-89e2-035a0955caf3", 00:25:57.838 "is_configured": true, 00:25:57.838 "data_offset": 0, 00:25:57.838 "data_size": 65536 00:25:57.838 }, 00:25:57.838 { 00:25:57.838 "name": "BaseBdev2", 00:25:57.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.838 "is_configured": false, 00:25:57.838 "data_offset": 0, 00:25:57.838 "data_size": 0 00:25:57.838 }, 00:25:57.838 { 00:25:57.838 "name": "BaseBdev3", 00:25:57.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.838 "is_configured": false, 00:25:57.838 "data_offset": 0, 00:25:57.838 "data_size": 0 00:25:57.838 } 00:25:57.838 ] 00:25:57.838 }' 00:25:57.838 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:57.838 11:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.405 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:58.663 [2024-05-15 11:19:17.194691] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:58.663 [2024-05-15 11:19:17.194756] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:25:58.663 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:58.922 [2024-05-15 11:19:17.442775] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:58.922 [2024-05-15 11:19:17.444435] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:58.922 [2024-05-15 11:19:17.444488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:58.922 [2024-05-15 11:19:17.444501] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:58.922 [2024-05-15 11:19:17.444528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=3 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.922 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:59.180 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:59.180 "name": "Existed_Raid", 00:25:59.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.180 "strip_size_kb": 64, 00:25:59.180 "state": "configuring", 00:25:59.180 "raid_level": "raid0", 00:25:59.180 "superblock": false, 00:25:59.180 "num_base_bdevs": 3, 00:25:59.180 "num_base_bdevs_discovered": 1, 00:25:59.180 "num_base_bdevs_operational": 3, 00:25:59.180 "base_bdevs_list": [ 00:25:59.180 { 00:25:59.180 "name": "BaseBdev1", 00:25:59.180 "uuid": "d420b290-645e-412e-89e2-035a0955caf3", 00:25:59.180 "is_configured": true, 00:25:59.180 "data_offset": 0, 00:25:59.180 "data_size": 65536 00:25:59.180 }, 00:25:59.180 { 00:25:59.180 "name": "BaseBdev2", 00:25:59.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.180 "is_configured": false, 00:25:59.180 "data_offset": 0, 00:25:59.180 "data_size": 0 00:25:59.180 }, 00:25:59.180 { 00:25:59.180 "name": "BaseBdev3", 00:25:59.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.180 "is_configured": false, 00:25:59.180 "data_offset": 0, 00:25:59.180 "data_size": 0 00:25:59.180 } 00:25:59.180 ] 00:25:59.180 }' 00:25:59.180 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:59.180 11:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.745 11:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:00.003 [2024-05-15 11:19:18.586374] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:00.003 BaseBdev2 00:26:00.003 11:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:26:00.003 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:26:00.003 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:00.003 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:26:00.003 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:00.003 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:00.003 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:00.260 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:00.517 [ 00:26:00.517 { 00:26:00.517 "name": "BaseBdev2", 00:26:00.517 "aliases": [ 00:26:00.517 "f4b4573c-5baf-4b54-98a4-bf8a4047c187" 00:26:00.517 ], 00:26:00.517 "product_name": "Malloc disk", 00:26:00.517 "block_size": 512, 00:26:00.517 "num_blocks": 65536, 00:26:00.517 "uuid": "f4b4573c-5baf-4b54-98a4-bf8a4047c187", 00:26:00.517 "assigned_rate_limits": { 00:26:00.517 "rw_ios_per_sec": 0, 00:26:00.517 "rw_mbytes_per_sec": 0, 00:26:00.517 "r_mbytes_per_sec": 0, 00:26:00.517 "w_mbytes_per_sec": 0 00:26:00.517 }, 00:26:00.517 "claimed": true, 00:26:00.517 "claim_type": "exclusive_write", 00:26:00.517 "zoned": false, 00:26:00.517 "supported_io_types": { 00:26:00.517 "read": true, 00:26:00.517 "write": true, 00:26:00.517 "unmap": true, 00:26:00.517 "write_zeroes": true, 00:26:00.517 "flush": true, 00:26:00.517 "reset": true, 00:26:00.517 "compare": false, 00:26:00.517 "compare_and_write": false, 00:26:00.517 "abort": true, 00:26:00.517 "nvme_admin": false, 00:26:00.517 "nvme_io": false 00:26:00.517 }, 00:26:00.517 "memory_domains": [ 00:26:00.517 { 00:26:00.517 "dma_device_id": "system", 00:26:00.518 "dma_device_type": 1 00:26:00.518 }, 00:26:00.518 { 00:26:00.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.518 "dma_device_type": 2 00:26:00.518 } 00:26:00.518 ], 00:26:00.518 "driver_specific": {} 00:26:00.518 } 00:26:00.518 ] 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.518 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.773 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:00.773 "name": "Existed_Raid", 00:26:00.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.773 "strip_size_kb": 64, 00:26:00.773 "state": "configuring", 00:26:00.773 "raid_level": "raid0", 00:26:00.773 "superblock": false, 00:26:00.773 
"num_base_bdevs": 3, 00:26:00.773 "num_base_bdevs_discovered": 2, 00:26:00.773 "num_base_bdevs_operational": 3, 00:26:00.773 "base_bdevs_list": [ 00:26:00.773 { 00:26:00.773 "name": "BaseBdev1", 00:26:00.773 "uuid": "d420b290-645e-412e-89e2-035a0955caf3", 00:26:00.773 "is_configured": true, 00:26:00.773 "data_offset": 0, 00:26:00.773 "data_size": 65536 00:26:00.773 }, 00:26:00.773 { 00:26:00.773 "name": "BaseBdev2", 00:26:00.774 "uuid": "f4b4573c-5baf-4b54-98a4-bf8a4047c187", 00:26:00.774 "is_configured": true, 00:26:00.774 "data_offset": 0, 00:26:00.774 "data_size": 65536 00:26:00.774 }, 00:26:00.774 { 00:26:00.774 "name": "BaseBdev3", 00:26:00.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.774 "is_configured": false, 00:26:00.774 "data_offset": 0, 00:26:00.774 "data_size": 0 00:26:00.774 } 00:26:00.774 ] 00:26:00.774 }' 00:26:00.774 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:00.774 11:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:01.594 [2024-05-15 11:19:20.159738] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:01.594 [2024-05-15 11:19:20.159789] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:26:01.594 [2024-05-15 11:19:20.159799] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:01.594 BaseBdev3 00:26:01.594 [2024-05-15 11:19:20.160145] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:26:01.594 [2024-05-15 11:19:20.160403] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:26:01.594 [2024-05-15 11:19:20.160418] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:26:01.594 [2024-05-15 11:19:20.160613] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:01.594 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:26:01.594 11:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:26:01.594 11:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:01.594 11:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:26:01.594 11:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:01.594 11:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:01.594 11:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:01.850 11:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:02.107 [ 00:26:02.107 { 00:26:02.107 "name": "BaseBdev3", 00:26:02.107 "aliases": [ 00:26:02.107 "7d57aadb-affa-492a-ab4c-7416e2659e5c" 00:26:02.107 ], 00:26:02.107 "product_name": "Malloc disk", 00:26:02.107 "block_size": 512, 00:26:02.107 "num_blocks": 65536, 00:26:02.107 "uuid": 
"7d57aadb-affa-492a-ab4c-7416e2659e5c", 00:26:02.107 "assigned_rate_limits": { 00:26:02.107 "rw_ios_per_sec": 0, 00:26:02.107 "rw_mbytes_per_sec": 0, 00:26:02.107 "r_mbytes_per_sec": 0, 00:26:02.107 "w_mbytes_per_sec": 0 00:26:02.107 }, 00:26:02.107 "claimed": true, 00:26:02.107 "claim_type": "exclusive_write", 00:26:02.107 "zoned": false, 00:26:02.107 "supported_io_types": { 00:26:02.107 "read": true, 00:26:02.107 "write": true, 00:26:02.107 "unmap": true, 00:26:02.107 "write_zeroes": true, 00:26:02.107 "flush": true, 00:26:02.107 "reset": true, 00:26:02.107 "compare": false, 00:26:02.107 "compare_and_write": false, 00:26:02.107 "abort": true, 00:26:02.107 "nvme_admin": false, 00:26:02.107 "nvme_io": false 00:26:02.107 }, 00:26:02.107 "memory_domains": [ 00:26:02.107 { 00:26:02.107 "dma_device_id": "system", 00:26:02.107 "dma_device_type": 1 00:26:02.107 }, 00:26:02.107 { 00:26:02.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.107 "dma_device_type": 2 00:26:02.107 } 00:26:02.107 ], 00:26:02.107 "driver_specific": {} 00:26:02.107 } 00:26:02.107 ] 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.107 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.364 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:02.364 "name": "Existed_Raid", 00:26:02.364 "uuid": "e1677d9e-5fd0-48de-90b2-10fd5338cbfa", 00:26:02.364 "strip_size_kb": 64, 00:26:02.364 "state": "online", 00:26:02.364 "raid_level": "raid0", 00:26:02.364 "superblock": false, 00:26:02.364 "num_base_bdevs": 3, 00:26:02.364 "num_base_bdevs_discovered": 3, 00:26:02.364 "num_base_bdevs_operational": 3, 00:26:02.364 "base_bdevs_list": [ 00:26:02.364 { 00:26:02.364 "name": "BaseBdev1", 00:26:02.364 "uuid": "d420b290-645e-412e-89e2-035a0955caf3", 00:26:02.364 "is_configured": true, 00:26:02.364 "data_offset": 0, 00:26:02.364 "data_size": 65536 
00:26:02.364 }, 00:26:02.364 { 00:26:02.364 "name": "BaseBdev2", 00:26:02.364 "uuid": "f4b4573c-5baf-4b54-98a4-bf8a4047c187", 00:26:02.364 "is_configured": true, 00:26:02.364 "data_offset": 0, 00:26:02.364 "data_size": 65536 00:26:02.364 }, 00:26:02.364 { 00:26:02.364 "name": "BaseBdev3", 00:26:02.364 "uuid": "7d57aadb-affa-492a-ab4c-7416e2659e5c", 00:26:02.364 "is_configured": true, 00:26:02.364 "data_offset": 0, 00:26:02.364 "data_size": 65536 00:26:02.364 } 00:26:02.364 ] 00:26:02.364 }' 00:26:02.364 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:02.364 11:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.929 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:26:02.929 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:26:02.929 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:02.929 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:02.929 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:02.929 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:26:02.929 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:02.929 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:03.249 [2024-05-15 11:19:21.736166] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:03.249 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:03.249 "name": "Existed_Raid", 00:26:03.249 "aliases": [ 00:26:03.249 "e1677d9e-5fd0-48de-90b2-10fd5338cbfa" 00:26:03.249 ], 00:26:03.249 "product_name": "Raid Volume", 00:26:03.249 "block_size": 512, 00:26:03.249 "num_blocks": 196608, 00:26:03.249 "uuid": "e1677d9e-5fd0-48de-90b2-10fd5338cbfa", 00:26:03.249 "assigned_rate_limits": { 00:26:03.249 "rw_ios_per_sec": 0, 00:26:03.249 "rw_mbytes_per_sec": 0, 00:26:03.249 "r_mbytes_per_sec": 0, 00:26:03.249 "w_mbytes_per_sec": 0 00:26:03.249 }, 00:26:03.249 "claimed": false, 00:26:03.249 "zoned": false, 00:26:03.249 "supported_io_types": { 00:26:03.249 "read": true, 00:26:03.249 "write": true, 00:26:03.249 "unmap": true, 00:26:03.249 "write_zeroes": true, 00:26:03.249 "flush": true, 00:26:03.249 "reset": true, 00:26:03.249 "compare": false, 00:26:03.249 "compare_and_write": false, 00:26:03.249 "abort": false, 00:26:03.249 "nvme_admin": false, 00:26:03.249 "nvme_io": false 00:26:03.249 }, 00:26:03.249 "memory_domains": [ 00:26:03.249 { 00:26:03.249 "dma_device_id": "system", 00:26:03.249 "dma_device_type": 1 00:26:03.249 }, 00:26:03.249 { 00:26:03.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.249 "dma_device_type": 2 00:26:03.249 }, 00:26:03.249 { 00:26:03.249 "dma_device_id": "system", 00:26:03.249 "dma_device_type": 1 00:26:03.249 }, 00:26:03.249 { 00:26:03.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.249 "dma_device_type": 2 00:26:03.249 }, 00:26:03.249 { 00:26:03.249 "dma_device_id": "system", 00:26:03.249 "dma_device_type": 1 00:26:03.249 }, 00:26:03.249 { 00:26:03.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.249 "dma_device_type": 2 00:26:03.249 } 00:26:03.249 
], 00:26:03.249 "driver_specific": { 00:26:03.249 "raid": { 00:26:03.249 "uuid": "e1677d9e-5fd0-48de-90b2-10fd5338cbfa", 00:26:03.249 "strip_size_kb": 64, 00:26:03.249 "state": "online", 00:26:03.249 "raid_level": "raid0", 00:26:03.249 "superblock": false, 00:26:03.249 "num_base_bdevs": 3, 00:26:03.249 "num_base_bdevs_discovered": 3, 00:26:03.249 "num_base_bdevs_operational": 3, 00:26:03.249 "base_bdevs_list": [ 00:26:03.249 { 00:26:03.249 "name": "BaseBdev1", 00:26:03.249 "uuid": "d420b290-645e-412e-89e2-035a0955caf3", 00:26:03.249 "is_configured": true, 00:26:03.249 "data_offset": 0, 00:26:03.249 "data_size": 65536 00:26:03.249 }, 00:26:03.249 { 00:26:03.249 "name": "BaseBdev2", 00:26:03.249 "uuid": "f4b4573c-5baf-4b54-98a4-bf8a4047c187", 00:26:03.249 "is_configured": true, 00:26:03.249 "data_offset": 0, 00:26:03.249 "data_size": 65536 00:26:03.249 }, 00:26:03.249 { 00:26:03.249 "name": "BaseBdev3", 00:26:03.249 "uuid": "7d57aadb-affa-492a-ab4c-7416e2659e5c", 00:26:03.249 "is_configured": true, 00:26:03.249 "data_offset": 0, 00:26:03.249 "data_size": 65536 00:26:03.249 } 00:26:03.249 ] 00:26:03.249 } 00:26:03.249 } 00:26:03.249 }' 00:26:03.249 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:03.249 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:26:03.249 BaseBdev2 00:26:03.249 BaseBdev3' 00:26:03.249 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:03.249 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:03.249 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:03.507 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:03.507 "name": "BaseBdev1", 00:26:03.507 "aliases": [ 00:26:03.507 "d420b290-645e-412e-89e2-035a0955caf3" 00:26:03.507 ], 00:26:03.507 "product_name": "Malloc disk", 00:26:03.507 "block_size": 512, 00:26:03.507 "num_blocks": 65536, 00:26:03.507 "uuid": "d420b290-645e-412e-89e2-035a0955caf3", 00:26:03.507 "assigned_rate_limits": { 00:26:03.507 "rw_ios_per_sec": 0, 00:26:03.507 "rw_mbytes_per_sec": 0, 00:26:03.507 "r_mbytes_per_sec": 0, 00:26:03.507 "w_mbytes_per_sec": 0 00:26:03.507 }, 00:26:03.507 "claimed": true, 00:26:03.507 "claim_type": "exclusive_write", 00:26:03.507 "zoned": false, 00:26:03.507 "supported_io_types": { 00:26:03.507 "read": true, 00:26:03.507 "write": true, 00:26:03.507 "unmap": true, 00:26:03.507 "write_zeroes": true, 00:26:03.507 "flush": true, 00:26:03.507 "reset": true, 00:26:03.507 "compare": false, 00:26:03.507 "compare_and_write": false, 00:26:03.507 "abort": true, 00:26:03.507 "nvme_admin": false, 00:26:03.507 "nvme_io": false 00:26:03.507 }, 00:26:03.507 "memory_domains": [ 00:26:03.507 { 00:26:03.507 "dma_device_id": "system", 00:26:03.507 "dma_device_type": 1 00:26:03.507 }, 00:26:03.507 { 00:26:03.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.507 "dma_device_type": 2 00:26:03.507 } 00:26:03.507 ], 00:26:03.507 "driver_specific": {} 00:26:03.507 }' 00:26:03.507 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:03.507 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:03.507 11:19:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:03.507 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:03.765 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:03.765 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:03.765 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:03.765 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:03.765 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:03.765 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:03.765 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:04.023 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:04.023 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:04.023 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:04.023 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:04.280 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:04.280 "name": "BaseBdev2", 00:26:04.280 "aliases": [ 00:26:04.280 "f4b4573c-5baf-4b54-98a4-bf8a4047c187" 00:26:04.280 ], 00:26:04.280 "product_name": "Malloc disk", 00:26:04.280 "block_size": 512, 00:26:04.280 "num_blocks": 65536, 00:26:04.280 "uuid": "f4b4573c-5baf-4b54-98a4-bf8a4047c187", 00:26:04.280 "assigned_rate_limits": { 00:26:04.280 "rw_ios_per_sec": 0, 00:26:04.280 "rw_mbytes_per_sec": 0, 00:26:04.280 "r_mbytes_per_sec": 0, 00:26:04.280 "w_mbytes_per_sec": 0 00:26:04.280 }, 00:26:04.280 "claimed": true, 00:26:04.280 "claim_type": "exclusive_write", 00:26:04.280 "zoned": false, 00:26:04.280 "supported_io_types": { 00:26:04.280 "read": true, 00:26:04.280 "write": true, 00:26:04.280 "unmap": true, 00:26:04.280 "write_zeroes": true, 00:26:04.280 "flush": true, 00:26:04.280 "reset": true, 00:26:04.280 "compare": false, 00:26:04.280 "compare_and_write": false, 00:26:04.280 "abort": true, 00:26:04.280 "nvme_admin": false, 00:26:04.280 "nvme_io": false 00:26:04.280 }, 00:26:04.280 "memory_domains": [ 00:26:04.280 { 00:26:04.280 "dma_device_id": "system", 00:26:04.280 "dma_device_type": 1 00:26:04.280 }, 00:26:04.280 { 00:26:04.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.280 "dma_device_type": 2 00:26:04.280 } 00:26:04.280 ], 00:26:04.280 "driver_specific": {} 00:26:04.280 }' 00:26:04.280 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:04.280 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:04.280 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:04.280 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:04.281 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:04.281 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:04.281 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
00:26:04.538 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:04.538 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:04.538 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:04.538 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:04.538 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:04.538 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:04.538 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:04.538 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:04.797 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:04.797 "name": "BaseBdev3", 00:26:04.797 "aliases": [ 00:26:04.797 "7d57aadb-affa-492a-ab4c-7416e2659e5c" 00:26:04.797 ], 00:26:04.797 "product_name": "Malloc disk", 00:26:04.797 "block_size": 512, 00:26:04.797 "num_blocks": 65536, 00:26:04.797 "uuid": "7d57aadb-affa-492a-ab4c-7416e2659e5c", 00:26:04.797 "assigned_rate_limits": { 00:26:04.797 "rw_ios_per_sec": 0, 00:26:04.797 "rw_mbytes_per_sec": 0, 00:26:04.797 "r_mbytes_per_sec": 0, 00:26:04.797 "w_mbytes_per_sec": 0 00:26:04.797 }, 00:26:04.797 "claimed": true, 00:26:04.797 "claim_type": "exclusive_write", 00:26:04.797 "zoned": false, 00:26:04.797 "supported_io_types": { 00:26:04.797 "read": true, 00:26:04.797 "write": true, 00:26:04.797 "unmap": true, 00:26:04.797 "write_zeroes": true, 00:26:04.797 "flush": true, 00:26:04.797 "reset": true, 00:26:04.797 "compare": false, 00:26:04.797 "compare_and_write": false, 00:26:04.797 "abort": true, 00:26:04.797 "nvme_admin": false, 00:26:04.797 "nvme_io": false 00:26:04.797 }, 00:26:04.797 "memory_domains": [ 00:26:04.797 { 00:26:04.797 "dma_device_id": "system", 00:26:04.797 "dma_device_type": 1 00:26:04.797 }, 00:26:04.797 { 00:26:04.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.797 "dma_device_type": 2 00:26:04.797 } 00:26:04.797 ], 00:26:04.797 "driver_specific": {} 00:26:04.797 }' 00:26:04.797 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:04.797 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:04.797 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:04.797 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:05.055 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:05.055 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:05.055 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:05.055 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:05.055 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:05.055 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:05.055 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:05.313 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ 
null == null ]] 00:26:05.313 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:05.313 [2024-05-15 11:19:23.896323] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:05.313 [2024-05-15 11:19:23.896367] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:05.313 [2024-05-15 11:19:23.896415] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.571 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:26:05.571 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:26:05.571 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:26:05.571 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.572 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.830 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:05.830 "name": "Existed_Raid", 00:26:05.830 "uuid": "e1677d9e-5fd0-48de-90b2-10fd5338cbfa", 00:26:05.830 "strip_size_kb": 64, 00:26:05.830 "state": "offline", 00:26:05.830 "raid_level": "raid0", 00:26:05.830 "superblock": false, 00:26:05.830 "num_base_bdevs": 3, 00:26:05.830 "num_base_bdevs_discovered": 2, 00:26:05.830 "num_base_bdevs_operational": 2, 00:26:05.830 "base_bdevs_list": [ 00:26:05.830 { 00:26:05.830 "name": null, 00:26:05.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.830 "is_configured": false, 00:26:05.830 "data_offset": 0, 00:26:05.830 "data_size": 65536 00:26:05.830 }, 00:26:05.830 { 00:26:05.830 "name": "BaseBdev2", 00:26:05.830 "uuid": "f4b4573c-5baf-4b54-98a4-bf8a4047c187", 00:26:05.830 "is_configured": true, 00:26:05.830 "data_offset": 0, 00:26:05.830 "data_size": 65536 00:26:05.830 }, 00:26:05.830 { 00:26:05.830 "name": "BaseBdev3", 00:26:05.830 "uuid": 
"7d57aadb-affa-492a-ab4c-7416e2659e5c", 00:26:05.830 "is_configured": true, 00:26:05.830 "data_offset": 0, 00:26:05.830 "data_size": 65536 00:26:05.830 } 00:26:05.830 ] 00:26:05.830 }' 00:26:05.830 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:05.830 11:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.396 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:06.396 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:06.396 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.396 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:26:06.654 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:26:06.654 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:06.654 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:06.912 [2024-05-15 11:19:25.358803] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:06.912 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:06.912 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:06.912 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:26:06.912 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.169 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:26:07.169 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:07.169 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:07.427 [2024-05-15 11:19:25.892574] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:07.427 [2024-05-15 11:19:25.892635] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:26:07.427 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:07.427 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:07.427 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:26:07.427 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.684 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:26:07.684 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:26:07.684 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:26:07.684 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:26:07.684 11:19:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:26:07.684 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:07.942 BaseBdev2 00:26:07.942 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:26:07.942 11:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:26:07.942 11:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:07.942 11:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:26:07.942 11:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:07.942 11:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:07.942 11:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:08.200 11:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:08.458 [ 00:26:08.458 { 00:26:08.458 "name": "BaseBdev2", 00:26:08.458 "aliases": [ 00:26:08.458 "10e33014-cfe8-4857-b7d4-e5b041d19f58" 00:26:08.458 ], 00:26:08.458 "product_name": "Malloc disk", 00:26:08.458 "block_size": 512, 00:26:08.458 "num_blocks": 65536, 00:26:08.458 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:08.458 "assigned_rate_limits": { 00:26:08.458 "rw_ios_per_sec": 0, 00:26:08.458 "rw_mbytes_per_sec": 0, 00:26:08.458 "r_mbytes_per_sec": 0, 00:26:08.458 "w_mbytes_per_sec": 0 00:26:08.458 }, 00:26:08.458 "claimed": false, 00:26:08.458 "zoned": false, 00:26:08.458 "supported_io_types": { 00:26:08.458 "read": true, 00:26:08.458 "write": true, 00:26:08.458 "unmap": true, 00:26:08.458 "write_zeroes": true, 00:26:08.458 "flush": true, 00:26:08.458 "reset": true, 00:26:08.458 "compare": false, 00:26:08.458 "compare_and_write": false, 00:26:08.458 "abort": true, 00:26:08.458 "nvme_admin": false, 00:26:08.458 "nvme_io": false 00:26:08.458 }, 00:26:08.458 "memory_domains": [ 00:26:08.458 { 00:26:08.458 "dma_device_id": "system", 00:26:08.458 "dma_device_type": 1 00:26:08.458 }, 00:26:08.458 { 00:26:08.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.458 "dma_device_type": 2 00:26:08.458 } 00:26:08.458 ], 00:26:08.458 "driver_specific": {} 00:26:08.458 } 00:26:08.458 ] 00:26:08.458 11:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:26:08.458 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:26:08.458 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:26:08.458 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:08.716 BaseBdev3 00:26:08.716 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:26:08.716 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:26:08.716 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:26:08.716 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:26:08.716 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:08.716 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:08.716 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:08.974 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:08.974 [ 00:26:08.974 { 00:26:08.974 "name": "BaseBdev3", 00:26:08.974 "aliases": [ 00:26:08.974 "22df9e06-4c32-4969-a05a-6e4e8b52c311" 00:26:08.974 ], 00:26:08.974 "product_name": "Malloc disk", 00:26:08.974 "block_size": 512, 00:26:08.974 "num_blocks": 65536, 00:26:08.974 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:08.974 "assigned_rate_limits": { 00:26:08.974 "rw_ios_per_sec": 0, 00:26:08.974 "rw_mbytes_per_sec": 0, 00:26:08.974 "r_mbytes_per_sec": 0, 00:26:08.974 "w_mbytes_per_sec": 0 00:26:08.974 }, 00:26:08.974 "claimed": false, 00:26:08.974 "zoned": false, 00:26:08.974 "supported_io_types": { 00:26:08.974 "read": true, 00:26:08.974 "write": true, 00:26:08.974 "unmap": true, 00:26:08.974 "write_zeroes": true, 00:26:08.974 "flush": true, 00:26:08.974 "reset": true, 00:26:08.974 "compare": false, 00:26:08.974 "compare_and_write": false, 00:26:08.974 "abort": true, 00:26:08.974 "nvme_admin": false, 00:26:08.974 "nvme_io": false 00:26:08.974 }, 00:26:08.974 "memory_domains": [ 00:26:08.974 { 00:26:08.974 "dma_device_id": "system", 00:26:08.974 "dma_device_type": 1 00:26:08.974 }, 00:26:08.974 { 00:26:08.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.974 "dma_device_type": 2 00:26:08.974 } 00:26:08.974 ], 00:26:08.974 "driver_specific": {} 00:26:08.974 } 00:26:08.974 ] 00:26:08.974 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:26:08.974 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:26:08.974 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:26:08.974 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:09.293 [2024-05-15 11:19:27.782015] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:09.293 [2024-05-15 11:19:27.782102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:09.293 [2024-05-15 11:19:27.782126] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:09.293 [2024-05-15 11:19:27.783949] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.293 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.551 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:09.551 "name": "Existed_Raid", 00:26:09.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.551 "strip_size_kb": 64, 00:26:09.551 "state": "configuring", 00:26:09.551 "raid_level": "raid0", 00:26:09.551 "superblock": false, 00:26:09.551 "num_base_bdevs": 3, 00:26:09.551 "num_base_bdevs_discovered": 2, 00:26:09.551 "num_base_bdevs_operational": 3, 00:26:09.551 "base_bdevs_list": [ 00:26:09.551 { 00:26:09.551 "name": "BaseBdev1", 00:26:09.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.551 "is_configured": false, 00:26:09.551 "data_offset": 0, 00:26:09.551 "data_size": 0 00:26:09.551 }, 00:26:09.551 { 00:26:09.551 "name": "BaseBdev2", 00:26:09.551 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:09.551 "is_configured": true, 00:26:09.551 "data_offset": 0, 00:26:09.551 "data_size": 65536 00:26:09.551 }, 00:26:09.551 { 00:26:09.551 "name": "BaseBdev3", 00:26:09.551 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:09.551 "is_configured": true, 00:26:09.551 "data_offset": 0, 00:26:09.551 "data_size": 65536 00:26:09.551 } 00:26:09.551 ] 00:26:09.551 }' 00:26:09.551 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:09.551 11:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.116 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:10.374 [2024-05-15 11:19:28.946235] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.374 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.631 11:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.631 "name": "Existed_Raid", 00:26:10.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.631 "strip_size_kb": 64, 00:26:10.631 "state": "configuring", 00:26:10.631 "raid_level": "raid0", 00:26:10.631 "superblock": false, 00:26:10.631 "num_base_bdevs": 3, 00:26:10.631 "num_base_bdevs_discovered": 1, 00:26:10.631 "num_base_bdevs_operational": 3, 00:26:10.631 "base_bdevs_list": [ 00:26:10.631 { 00:26:10.631 "name": "BaseBdev1", 00:26:10.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.631 "is_configured": false, 00:26:10.631 "data_offset": 0, 00:26:10.631 "data_size": 0 00:26:10.631 }, 00:26:10.631 { 00:26:10.631 "name": null, 00:26:10.631 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:10.631 "is_configured": false, 00:26:10.631 "data_offset": 0, 00:26:10.631 "data_size": 65536 00:26:10.631 }, 00:26:10.631 { 00:26:10.631 "name": "BaseBdev3", 00:26:10.631 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:10.631 "is_configured": true, 00:26:10.631 "data_offset": 0, 00:26:10.631 "data_size": 65536 00:26:10.631 } 00:26:10.631 ] 00:26:10.631 }' 00:26:10.631 11:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.631 11:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.564 11:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.564 11:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:11.564 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:26:11.564 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:11.822 [2024-05-15 11:19:30.442551] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:11.822 BaseBdev1 00:26:11.822 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:26:12.081 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:26:12.081 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:12.081 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:26:12.081 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:12.081 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:12.081 11:19:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:12.081 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:12.340 [ 00:26:12.340 { 00:26:12.340 "name": "BaseBdev1", 00:26:12.340 "aliases": [ 00:26:12.340 "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90" 00:26:12.340 ], 00:26:12.340 "product_name": "Malloc disk", 00:26:12.340 "block_size": 512, 00:26:12.340 "num_blocks": 65536, 00:26:12.340 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:12.340 "assigned_rate_limits": { 00:26:12.340 "rw_ios_per_sec": 0, 00:26:12.340 "rw_mbytes_per_sec": 0, 00:26:12.340 "r_mbytes_per_sec": 0, 00:26:12.340 "w_mbytes_per_sec": 0 00:26:12.340 }, 00:26:12.340 "claimed": true, 00:26:12.340 "claim_type": "exclusive_write", 00:26:12.340 "zoned": false, 00:26:12.340 "supported_io_types": { 00:26:12.340 "read": true, 00:26:12.340 "write": true, 00:26:12.340 "unmap": true, 00:26:12.340 "write_zeroes": true, 00:26:12.340 "flush": true, 00:26:12.340 "reset": true, 00:26:12.340 "compare": false, 00:26:12.340 "compare_and_write": false, 00:26:12.340 "abort": true, 00:26:12.340 "nvme_admin": false, 00:26:12.340 "nvme_io": false 00:26:12.340 }, 00:26:12.340 "memory_domains": [ 00:26:12.340 { 00:26:12.340 "dma_device_id": "system", 00:26:12.340 "dma_device_type": 1 00:26:12.340 }, 00:26:12.340 { 00:26:12.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.340 "dma_device_type": 2 00:26:12.340 } 00:26:12.340 ], 00:26:12.340 "driver_specific": {} 00:26:12.340 } 00:26:12.340 ] 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.340 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.600 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:12.600 "name": "Existed_Raid", 00:26:12.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.600 "strip_size_kb": 64, 00:26:12.600 "state": "configuring", 00:26:12.600 
"raid_level": "raid0", 00:26:12.600 "superblock": false, 00:26:12.600 "num_base_bdevs": 3, 00:26:12.600 "num_base_bdevs_discovered": 2, 00:26:12.600 "num_base_bdevs_operational": 3, 00:26:12.600 "base_bdevs_list": [ 00:26:12.600 { 00:26:12.600 "name": "BaseBdev1", 00:26:12.600 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:12.600 "is_configured": true, 00:26:12.600 "data_offset": 0, 00:26:12.600 "data_size": 65536 00:26:12.600 }, 00:26:12.600 { 00:26:12.600 "name": null, 00:26:12.600 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:12.600 "is_configured": false, 00:26:12.600 "data_offset": 0, 00:26:12.600 "data_size": 65536 00:26:12.600 }, 00:26:12.600 { 00:26:12.600 "name": "BaseBdev3", 00:26:12.600 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:12.600 "is_configured": true, 00:26:12.600 "data_offset": 0, 00:26:12.600 "data_size": 65536 00:26:12.600 } 00:26:12.600 ] 00:26:12.600 }' 00:26:12.600 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:12.600 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.166 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.166 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:13.424 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:13.424 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:13.683 [2024-05-15 11:19:32.186885] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.683 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.941 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:13.941 "name": "Existed_Raid", 00:26:13.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.941 "strip_size_kb": 64, 
00:26:13.941 "state": "configuring", 00:26:13.941 "raid_level": "raid0", 00:26:13.941 "superblock": false, 00:26:13.941 "num_base_bdevs": 3, 00:26:13.941 "num_base_bdevs_discovered": 1, 00:26:13.941 "num_base_bdevs_operational": 3, 00:26:13.941 "base_bdevs_list": [ 00:26:13.941 { 00:26:13.941 "name": "BaseBdev1", 00:26:13.941 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:13.941 "is_configured": true, 00:26:13.941 "data_offset": 0, 00:26:13.941 "data_size": 65536 00:26:13.941 }, 00:26:13.941 { 00:26:13.941 "name": null, 00:26:13.941 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:13.942 "is_configured": false, 00:26:13.942 "data_offset": 0, 00:26:13.942 "data_size": 65536 00:26:13.942 }, 00:26:13.942 { 00:26:13.942 "name": null, 00:26:13.942 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:13.942 "is_configured": false, 00:26:13.942 "data_offset": 0, 00:26:13.942 "data_size": 65536 00:26:13.942 } 00:26:13.942 ] 00:26:13.942 }' 00:26:13.942 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:13.942 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.507 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.507 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:14.765 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:26:14.765 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:15.022 [2024-05-15 11:19:33.519187] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.023 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.280 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:15.280 "name": "Existed_Raid", 00:26:15.280 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:15.280 "strip_size_kb": 64, 00:26:15.280 "state": "configuring", 00:26:15.280 "raid_level": "raid0", 00:26:15.280 "superblock": false, 00:26:15.280 "num_base_bdevs": 3, 00:26:15.280 "num_base_bdevs_discovered": 2, 00:26:15.280 "num_base_bdevs_operational": 3, 00:26:15.280 "base_bdevs_list": [ 00:26:15.280 { 00:26:15.280 "name": "BaseBdev1", 00:26:15.280 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:15.280 "is_configured": true, 00:26:15.280 "data_offset": 0, 00:26:15.280 "data_size": 65536 00:26:15.280 }, 00:26:15.280 { 00:26:15.280 "name": null, 00:26:15.280 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:15.280 "is_configured": false, 00:26:15.280 "data_offset": 0, 00:26:15.280 "data_size": 65536 00:26:15.280 }, 00:26:15.280 { 00:26:15.280 "name": "BaseBdev3", 00:26:15.280 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:15.280 "is_configured": true, 00:26:15.280 "data_offset": 0, 00:26:15.280 "data_size": 65536 00:26:15.280 } 00:26:15.280 ] 00:26:15.280 }' 00:26:15.280 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:15.280 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.939 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.939 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:16.197 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:26:16.197 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:16.455 [2024-05-15 11:19:34.923331] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.455 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.712 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:16.712 
"name": "Existed_Raid", 00:26:16.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.712 "strip_size_kb": 64, 00:26:16.712 "state": "configuring", 00:26:16.712 "raid_level": "raid0", 00:26:16.712 "superblock": false, 00:26:16.712 "num_base_bdevs": 3, 00:26:16.712 "num_base_bdevs_discovered": 1, 00:26:16.712 "num_base_bdevs_operational": 3, 00:26:16.712 "base_bdevs_list": [ 00:26:16.712 { 00:26:16.712 "name": null, 00:26:16.712 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:16.712 "is_configured": false, 00:26:16.712 "data_offset": 0, 00:26:16.712 "data_size": 65536 00:26:16.712 }, 00:26:16.712 { 00:26:16.712 "name": null, 00:26:16.712 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:16.712 "is_configured": false, 00:26:16.712 "data_offset": 0, 00:26:16.712 "data_size": 65536 00:26:16.712 }, 00:26:16.712 { 00:26:16.712 "name": "BaseBdev3", 00:26:16.712 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:16.712 "is_configured": true, 00:26:16.712 "data_offset": 0, 00:26:16.712 "data_size": 65536 00:26:16.712 } 00:26:16.712 ] 00:26:16.712 }' 00:26:16.712 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:16.712 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.702 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.702 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:17.702 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:26:17.702 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:17.960 [2024-05-15 11:19:36.340530] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.960 11:19:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:17.960 "name": "Existed_Raid", 00:26:17.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.960 "strip_size_kb": 64, 00:26:17.960 "state": "configuring", 00:26:17.960 "raid_level": "raid0", 00:26:17.960 "superblock": false, 00:26:17.960 "num_base_bdevs": 3, 00:26:17.960 "num_base_bdevs_discovered": 2, 00:26:17.960 "num_base_bdevs_operational": 3, 00:26:17.960 "base_bdevs_list": [ 00:26:17.960 { 00:26:17.960 "name": null, 00:26:17.960 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:17.960 "is_configured": false, 00:26:17.960 "data_offset": 0, 00:26:17.960 "data_size": 65536 00:26:17.960 }, 00:26:17.960 { 00:26:17.960 "name": "BaseBdev2", 00:26:17.960 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:17.960 "is_configured": true, 00:26:17.960 "data_offset": 0, 00:26:17.960 "data_size": 65536 00:26:17.960 }, 00:26:17.960 { 00:26:17.960 "name": "BaseBdev3", 00:26:17.960 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:17.960 "is_configured": true, 00:26:17.960 "data_offset": 0, 00:26:17.960 "data_size": 65536 00:26:17.960 } 00:26:17.960 ] 00:26:17.960 }' 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:17.960 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.894 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.894 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:18.894 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:26:18.894 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.894 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:19.153 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6a786f2d-95e1-40cf-a5b8-9f611ef4ab90 00:26:19.411 [2024-05-15 11:19:37.924598] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:19.411 [2024-05-15 11:19:37.924640] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:26:19.411 [2024-05-15 11:19:37.924650] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:19.411 [2024-05-15 11:19:37.924743] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:19.411 NewBaseBdev 00:26:19.411 [2024-05-15 11:19:37.925288] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:26:19.411 [2024-05-15 11:19:37.925310] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:26:19.411 [2024-05-15 11:19:37.925492] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:19.411 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:26:19.411 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:26:19.411 11:19:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:19.411 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:26:19.411 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:19.411 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:19.411 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:19.669 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:19.926 [ 00:26:19.926 { 00:26:19.926 "name": "NewBaseBdev", 00:26:19.926 "aliases": [ 00:26:19.926 "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90" 00:26:19.926 ], 00:26:19.926 "product_name": "Malloc disk", 00:26:19.926 "block_size": 512, 00:26:19.926 "num_blocks": 65536, 00:26:19.926 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:19.926 "assigned_rate_limits": { 00:26:19.926 "rw_ios_per_sec": 0, 00:26:19.926 "rw_mbytes_per_sec": 0, 00:26:19.926 "r_mbytes_per_sec": 0, 00:26:19.926 "w_mbytes_per_sec": 0 00:26:19.926 }, 00:26:19.926 "claimed": true, 00:26:19.926 "claim_type": "exclusive_write", 00:26:19.926 "zoned": false, 00:26:19.926 "supported_io_types": { 00:26:19.926 "read": true, 00:26:19.926 "write": true, 00:26:19.926 "unmap": true, 00:26:19.926 "write_zeroes": true, 00:26:19.926 "flush": true, 00:26:19.926 "reset": true, 00:26:19.926 "compare": false, 00:26:19.926 "compare_and_write": false, 00:26:19.926 "abort": true, 00:26:19.926 "nvme_admin": false, 00:26:19.926 "nvme_io": false 00:26:19.926 }, 00:26:19.926 "memory_domains": [ 00:26:19.926 { 00:26:19.926 "dma_device_id": "system", 00:26:19.926 "dma_device_type": 1 00:26:19.926 }, 00:26:19.926 { 00:26:19.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.926 "dma_device_type": 2 00:26:19.926 } 00:26:19.926 ], 00:26:19.926 "driver_specific": {} 00:26:19.926 } 00:26:19.926 ] 00:26:19.926 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:26:19.926 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:26:19.926 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:19.926 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:19.926 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:19.927 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:19.927 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:19.927 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:19.927 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:19.927 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:19.927 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:19.927 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.927 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.185 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:20.185 "name": "Existed_Raid", 00:26:20.185 "uuid": "41d07373-27db-4b92-b8dc-8696f819d595", 00:26:20.185 "strip_size_kb": 64, 00:26:20.185 "state": "online", 00:26:20.185 "raid_level": "raid0", 00:26:20.185 "superblock": false, 00:26:20.185 "num_base_bdevs": 3, 00:26:20.185 "num_base_bdevs_discovered": 3, 00:26:20.185 "num_base_bdevs_operational": 3, 00:26:20.185 "base_bdevs_list": [ 00:26:20.185 { 00:26:20.185 "name": "NewBaseBdev", 00:26:20.185 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:20.185 "is_configured": true, 00:26:20.185 "data_offset": 0, 00:26:20.185 "data_size": 65536 00:26:20.185 }, 00:26:20.185 { 00:26:20.185 "name": "BaseBdev2", 00:26:20.185 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:20.185 "is_configured": true, 00:26:20.185 "data_offset": 0, 00:26:20.185 "data_size": 65536 00:26:20.185 }, 00:26:20.185 { 00:26:20.185 "name": "BaseBdev3", 00:26:20.185 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:20.185 "is_configured": true, 00:26:20.185 "data_offset": 0, 00:26:20.185 "data_size": 65536 00:26:20.185 } 00:26:20.185 ] 00:26:20.185 }' 00:26:20.185 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:20.185 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.750 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:26:20.750 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:26:20.750 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:20.750 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:20.750 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:20.750 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:26:20.750 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:20.750 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:21.008 [2024-05-15 11:19:39.445147] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:21.008 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:21.008 "name": "Existed_Raid", 00:26:21.008 "aliases": [ 00:26:21.008 "41d07373-27db-4b92-b8dc-8696f819d595" 00:26:21.008 ], 00:26:21.008 "product_name": "Raid Volume", 00:26:21.008 "block_size": 512, 00:26:21.008 "num_blocks": 196608, 00:26:21.008 "uuid": "41d07373-27db-4b92-b8dc-8696f819d595", 00:26:21.008 "assigned_rate_limits": { 00:26:21.008 "rw_ios_per_sec": 0, 00:26:21.008 "rw_mbytes_per_sec": 0, 00:26:21.008 "r_mbytes_per_sec": 0, 00:26:21.008 "w_mbytes_per_sec": 0 00:26:21.008 }, 00:26:21.008 "claimed": false, 00:26:21.008 "zoned": false, 00:26:21.008 "supported_io_types": { 00:26:21.008 "read": true, 00:26:21.008 "write": true, 00:26:21.008 "unmap": true, 00:26:21.008 "write_zeroes": true, 00:26:21.008 "flush": true, 00:26:21.008 "reset": true, 
00:26:21.008 "compare": false, 00:26:21.008 "compare_and_write": false, 00:26:21.008 "abort": false, 00:26:21.008 "nvme_admin": false, 00:26:21.008 "nvme_io": false 00:26:21.008 }, 00:26:21.008 "memory_domains": [ 00:26:21.008 { 00:26:21.008 "dma_device_id": "system", 00:26:21.008 "dma_device_type": 1 00:26:21.008 }, 00:26:21.008 { 00:26:21.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.008 "dma_device_type": 2 00:26:21.008 }, 00:26:21.008 { 00:26:21.008 "dma_device_id": "system", 00:26:21.008 "dma_device_type": 1 00:26:21.008 }, 00:26:21.008 { 00:26:21.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.008 "dma_device_type": 2 00:26:21.008 }, 00:26:21.008 { 00:26:21.008 "dma_device_id": "system", 00:26:21.008 "dma_device_type": 1 00:26:21.008 }, 00:26:21.008 { 00:26:21.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.008 "dma_device_type": 2 00:26:21.008 } 00:26:21.008 ], 00:26:21.008 "driver_specific": { 00:26:21.008 "raid": { 00:26:21.008 "uuid": "41d07373-27db-4b92-b8dc-8696f819d595", 00:26:21.008 "strip_size_kb": 64, 00:26:21.008 "state": "online", 00:26:21.008 "raid_level": "raid0", 00:26:21.008 "superblock": false, 00:26:21.008 "num_base_bdevs": 3, 00:26:21.008 "num_base_bdevs_discovered": 3, 00:26:21.008 "num_base_bdevs_operational": 3, 00:26:21.008 "base_bdevs_list": [ 00:26:21.008 { 00:26:21.008 "name": "NewBaseBdev", 00:26:21.008 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:21.008 "is_configured": true, 00:26:21.008 "data_offset": 0, 00:26:21.008 "data_size": 65536 00:26:21.008 }, 00:26:21.008 { 00:26:21.008 "name": "BaseBdev2", 00:26:21.008 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:21.008 "is_configured": true, 00:26:21.008 "data_offset": 0, 00:26:21.008 "data_size": 65536 00:26:21.008 }, 00:26:21.008 { 00:26:21.008 "name": "BaseBdev3", 00:26:21.008 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:21.008 "is_configured": true, 00:26:21.008 "data_offset": 0, 00:26:21.008 "data_size": 65536 00:26:21.008 } 00:26:21.008 ] 00:26:21.008 } 00:26:21.008 } 00:26:21.008 }' 00:26:21.008 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:21.008 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:26:21.008 BaseBdev2 00:26:21.008 BaseBdev3' 00:26:21.008 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:21.008 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:21.008 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:21.266 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:21.266 "name": "NewBaseBdev", 00:26:21.266 "aliases": [ 00:26:21.266 "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90" 00:26:21.266 ], 00:26:21.266 "product_name": "Malloc disk", 00:26:21.266 "block_size": 512, 00:26:21.266 "num_blocks": 65536, 00:26:21.266 "uuid": "6a786f2d-95e1-40cf-a5b8-9f611ef4ab90", 00:26:21.266 "assigned_rate_limits": { 00:26:21.266 "rw_ios_per_sec": 0, 00:26:21.266 "rw_mbytes_per_sec": 0, 00:26:21.266 "r_mbytes_per_sec": 0, 00:26:21.266 "w_mbytes_per_sec": 0 00:26:21.266 }, 00:26:21.266 "claimed": true, 00:26:21.266 "claim_type": "exclusive_write", 00:26:21.266 "zoned": false, 00:26:21.266 "supported_io_types": { 00:26:21.266 
"read": true, 00:26:21.266 "write": true, 00:26:21.266 "unmap": true, 00:26:21.266 "write_zeroes": true, 00:26:21.266 "flush": true, 00:26:21.266 "reset": true, 00:26:21.266 "compare": false, 00:26:21.266 "compare_and_write": false, 00:26:21.266 "abort": true, 00:26:21.266 "nvme_admin": false, 00:26:21.266 "nvme_io": false 00:26:21.266 }, 00:26:21.266 "memory_domains": [ 00:26:21.266 { 00:26:21.266 "dma_device_id": "system", 00:26:21.266 "dma_device_type": 1 00:26:21.266 }, 00:26:21.266 { 00:26:21.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.266 "dma_device_type": 2 00:26:21.266 } 00:26:21.266 ], 00:26:21.266 "driver_specific": {} 00:26:21.266 }' 00:26:21.266 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:21.266 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:21.266 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:21.266 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:21.523 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:21.523 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:21.523 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:21.523 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:21.523 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:21.523 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:21.523 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:21.782 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:21.782 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:21.782 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:21.782 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:22.040 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:22.040 "name": "BaseBdev2", 00:26:22.040 "aliases": [ 00:26:22.040 "10e33014-cfe8-4857-b7d4-e5b041d19f58" 00:26:22.040 ], 00:26:22.040 "product_name": "Malloc disk", 00:26:22.040 "block_size": 512, 00:26:22.040 "num_blocks": 65536, 00:26:22.040 "uuid": "10e33014-cfe8-4857-b7d4-e5b041d19f58", 00:26:22.040 "assigned_rate_limits": { 00:26:22.040 "rw_ios_per_sec": 0, 00:26:22.040 "rw_mbytes_per_sec": 0, 00:26:22.040 "r_mbytes_per_sec": 0, 00:26:22.040 "w_mbytes_per_sec": 0 00:26:22.040 }, 00:26:22.040 "claimed": true, 00:26:22.040 "claim_type": "exclusive_write", 00:26:22.040 "zoned": false, 00:26:22.040 "supported_io_types": { 00:26:22.040 "read": true, 00:26:22.040 "write": true, 00:26:22.040 "unmap": true, 00:26:22.040 "write_zeroes": true, 00:26:22.040 "flush": true, 00:26:22.040 "reset": true, 00:26:22.040 "compare": false, 00:26:22.040 "compare_and_write": false, 00:26:22.040 "abort": true, 00:26:22.040 "nvme_admin": false, 00:26:22.040 "nvme_io": false 00:26:22.040 }, 00:26:22.040 "memory_domains": [ 00:26:22.040 { 00:26:22.040 "dma_device_id": "system", 00:26:22.040 "dma_device_type": 1 00:26:22.040 }, 00:26:22.040 { 
00:26:22.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.040 "dma_device_type": 2 00:26:22.040 } 00:26:22.040 ], 00:26:22.040 "driver_specific": {} 00:26:22.040 }' 00:26:22.040 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:22.040 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:22.040 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:22.040 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:22.040 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:22.354 11:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:22.612 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:22.612 "name": "BaseBdev3", 00:26:22.612 "aliases": [ 00:26:22.612 "22df9e06-4c32-4969-a05a-6e4e8b52c311" 00:26:22.612 ], 00:26:22.612 "product_name": "Malloc disk", 00:26:22.612 "block_size": 512, 00:26:22.612 "num_blocks": 65536, 00:26:22.612 "uuid": "22df9e06-4c32-4969-a05a-6e4e8b52c311", 00:26:22.612 "assigned_rate_limits": { 00:26:22.612 "rw_ios_per_sec": 0, 00:26:22.612 "rw_mbytes_per_sec": 0, 00:26:22.612 "r_mbytes_per_sec": 0, 00:26:22.612 "w_mbytes_per_sec": 0 00:26:22.612 }, 00:26:22.612 "claimed": true, 00:26:22.612 "claim_type": "exclusive_write", 00:26:22.612 "zoned": false, 00:26:22.612 "supported_io_types": { 00:26:22.612 "read": true, 00:26:22.612 "write": true, 00:26:22.612 "unmap": true, 00:26:22.612 "write_zeroes": true, 00:26:22.612 "flush": true, 00:26:22.612 "reset": true, 00:26:22.612 "compare": false, 00:26:22.612 "compare_and_write": false, 00:26:22.612 "abort": true, 00:26:22.612 "nvme_admin": false, 00:26:22.612 "nvme_io": false 00:26:22.612 }, 00:26:22.612 "memory_domains": [ 00:26:22.612 { 00:26:22.612 "dma_device_id": "system", 00:26:22.612 "dma_device_type": 1 00:26:22.612 }, 00:26:22.612 { 00:26:22.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.612 "dma_device_type": 2 00:26:22.612 } 00:26:22.612 ], 00:26:22.612 "driver_specific": {} 00:26:22.612 }' 00:26:22.612 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:22.871 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:22.871 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:22.871 11:19:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:22.871 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:22.871 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:22.871 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:23.129 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:23.129 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:23.129 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:23.129 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:23.129 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:23.129 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:23.387 [2024-05-15 11:19:41.909343] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:23.387 [2024-05-15 11:19:41.909385] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:23.387 [2024-05-15 11:19:41.909457] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:23.387 [2024-05-15 11:19:41.909499] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:23.387 [2024-05-15 11:19:41.909511] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 56683 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 56683 ']' 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 56683 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 56683 00:26:23.387 killing process with pid 56683 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 56683' 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 56683 00:26:23.387 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 56683 00:26:23.387 [2024-05-15 11:19:41.950182] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:23.646 [2024-05-15 11:19:42.254347] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:26:25.021 00:26:25.021 real 0m30.583s 00:26:25.021 user 0m57.367s 00:26:25.021 sys 0m3.200s 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.021 ************************************ 00:26:25.021 END TEST raid_state_function_test 00:26:25.021 ************************************ 00:26:25.021 11:19:43 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:26:25.021 11:19:43 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:26:25.021 11:19:43 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:25.021 11:19:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:25.021 ************************************ 00:26:25.021 START TEST raid_state_function_test_sb 00:26:25.021 ************************************ 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # 
strip_size=64 00:26:25.021 Process raid pid: 57686 00:26:25.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=57686 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 57686' 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 57686 /var/tmp/spdk-raid.sock 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 57686 ']' 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:25.021 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.332 [2024-05-15 11:19:43.695940] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:26:25.332 [2024-05-15 11:19:43.696134] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.332 [2024-05-15 11:19:43.862827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.606 [2024-05-15 11:19:44.108780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.864 [2024-05-15 11:19:44.312989] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:26.123 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:26.123 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:26:26.123 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:26.381 [2024-05-15 11:19:44.768041] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:26.381 [2024-05-15 11:19:44.768150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:26.381 [2024-05-15 11:19:44.768166] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:26.381 [2024-05-15 11:19:44.768203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:26.381 [2024-05-15 11:19:44.768212] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:26.381 [2024-05-15 11:19:44.768258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.381 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.639 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:26.639 "name": "Existed_Raid", 00:26:26.639 "uuid": 
"6cc34dff-00f4-49e0-8cda-bfe79bd5f57a", 00:26:26.639 "strip_size_kb": 64, 00:26:26.639 "state": "configuring", 00:26:26.639 "raid_level": "raid0", 00:26:26.639 "superblock": true, 00:26:26.639 "num_base_bdevs": 3, 00:26:26.639 "num_base_bdevs_discovered": 0, 00:26:26.639 "num_base_bdevs_operational": 3, 00:26:26.639 "base_bdevs_list": [ 00:26:26.639 { 00:26:26.639 "name": "BaseBdev1", 00:26:26.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.639 "is_configured": false, 00:26:26.639 "data_offset": 0, 00:26:26.639 "data_size": 0 00:26:26.639 }, 00:26:26.639 { 00:26:26.639 "name": "BaseBdev2", 00:26:26.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.639 "is_configured": false, 00:26:26.639 "data_offset": 0, 00:26:26.639 "data_size": 0 00:26:26.639 }, 00:26:26.640 { 00:26:26.640 "name": "BaseBdev3", 00:26:26.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.640 "is_configured": false, 00:26:26.640 "data_offset": 0, 00:26:26.640 "data_size": 0 00:26:26.640 } 00:26:26.640 ] 00:26:26.640 }' 00:26:26.640 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:26.640 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.205 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:27.464 [2024-05-15 11:19:45.848040] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:27.464 [2024-05-15 11:19:45.848096] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:26:27.464 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:27.464 [2024-05-15 11:19:46.064110] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:27.464 [2024-05-15 11:19:46.064193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:27.464 [2024-05-15 11:19:46.064214] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:27.464 [2024-05-15 11:19:46.064243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:27.464 [2024-05-15 11:19:46.064253] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:27.464 [2024-05-15 11:19:46.064279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:27.464 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:27.722 BaseBdev1 00:26:27.722 [2024-05-15 11:19:46.297045] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:27.722 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:26:27.722 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:26:27.722 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:27.722 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 
00:26:27.722 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:27.722 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:27.722 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:27.980 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:28.239 [ 00:26:28.239 { 00:26:28.239 "name": "BaseBdev1", 00:26:28.239 "aliases": [ 00:26:28.239 "be3829c8-276f-46f5-9556-24277c650322" 00:26:28.239 ], 00:26:28.239 "product_name": "Malloc disk", 00:26:28.239 "block_size": 512, 00:26:28.239 "num_blocks": 65536, 00:26:28.239 "uuid": "be3829c8-276f-46f5-9556-24277c650322", 00:26:28.239 "assigned_rate_limits": { 00:26:28.239 "rw_ios_per_sec": 0, 00:26:28.239 "rw_mbytes_per_sec": 0, 00:26:28.239 "r_mbytes_per_sec": 0, 00:26:28.239 "w_mbytes_per_sec": 0 00:26:28.239 }, 00:26:28.239 "claimed": true, 00:26:28.239 "claim_type": "exclusive_write", 00:26:28.239 "zoned": false, 00:26:28.239 "supported_io_types": { 00:26:28.239 "read": true, 00:26:28.239 "write": true, 00:26:28.239 "unmap": true, 00:26:28.239 "write_zeroes": true, 00:26:28.239 "flush": true, 00:26:28.239 "reset": true, 00:26:28.239 "compare": false, 00:26:28.239 "compare_and_write": false, 00:26:28.239 "abort": true, 00:26:28.239 "nvme_admin": false, 00:26:28.239 "nvme_io": false 00:26:28.239 }, 00:26:28.239 "memory_domains": [ 00:26:28.239 { 00:26:28.239 "dma_device_id": "system", 00:26:28.239 "dma_device_type": 1 00:26:28.239 }, 00:26:28.239 { 00:26:28.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.239 "dma_device_type": 2 00:26:28.239 } 00:26:28.239 ], 00:26:28.239 "driver_specific": {} 00:26:28.239 } 00:26:28.239 ] 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.239 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:26:28.498 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:28.498 "name": "Existed_Raid", 00:26:28.498 "uuid": "9d19c5b4-bc2d-4e9e-912d-15cb512e0329", 00:26:28.498 "strip_size_kb": 64, 00:26:28.498 "state": "configuring", 00:26:28.498 "raid_level": "raid0", 00:26:28.498 "superblock": true, 00:26:28.498 "num_base_bdevs": 3, 00:26:28.498 "num_base_bdevs_discovered": 1, 00:26:28.498 "num_base_bdevs_operational": 3, 00:26:28.498 "base_bdevs_list": [ 00:26:28.498 { 00:26:28.498 "name": "BaseBdev1", 00:26:28.498 "uuid": "be3829c8-276f-46f5-9556-24277c650322", 00:26:28.498 "is_configured": true, 00:26:28.498 "data_offset": 2048, 00:26:28.498 "data_size": 63488 00:26:28.498 }, 00:26:28.498 { 00:26:28.498 "name": "BaseBdev2", 00:26:28.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.498 "is_configured": false, 00:26:28.498 "data_offset": 0, 00:26:28.498 "data_size": 0 00:26:28.498 }, 00:26:28.498 { 00:26:28.498 "name": "BaseBdev3", 00:26:28.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.498 "is_configured": false, 00:26:28.498 "data_offset": 0, 00:26:28.498 "data_size": 0 00:26:28.498 } 00:26:28.498 ] 00:26:28.498 }' 00:26:28.498 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:28.498 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.064 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:29.323 [2024-05-15 11:19:47.857359] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:29.323 [2024-05-15 11:19:47.857423] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:26:29.323 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:29.581 [2024-05-15 11:19:48.153447] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:29.581 [2024-05-15 11:19:48.154937] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:29.581 [2024-05-15 11:19:48.154989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:29.581 [2024-05-15 11:19:48.155019] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:29.581 [2024-05-15 11:19:48.155049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.581 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.840 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:29.840 "name": "Existed_Raid", 00:26:29.840 "uuid": "feaae9b3-9a93-4069-9e1e-052cbdd46dbb", 00:26:29.840 "strip_size_kb": 64, 00:26:29.840 "state": "configuring", 00:26:29.840 "raid_level": "raid0", 00:26:29.840 "superblock": true, 00:26:29.840 "num_base_bdevs": 3, 00:26:29.840 "num_base_bdevs_discovered": 1, 00:26:29.840 "num_base_bdevs_operational": 3, 00:26:29.840 "base_bdevs_list": [ 00:26:29.840 { 00:26:29.840 "name": "BaseBdev1", 00:26:29.840 "uuid": "be3829c8-276f-46f5-9556-24277c650322", 00:26:29.840 "is_configured": true, 00:26:29.840 "data_offset": 2048, 00:26:29.840 "data_size": 63488 00:26:29.840 }, 00:26:29.840 { 00:26:29.840 "name": "BaseBdev2", 00:26:29.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.840 "is_configured": false, 00:26:29.840 "data_offset": 0, 00:26:29.840 "data_size": 0 00:26:29.840 }, 00:26:29.840 { 00:26:29.840 "name": "BaseBdev3", 00:26:29.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.840 "is_configured": false, 00:26:29.840 "data_offset": 0, 00:26:29.840 "data_size": 0 00:26:29.840 } 00:26:29.840 ] 00:26:29.840 }' 00:26:29.840 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:29.840 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.775 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:30.775 [2024-05-15 11:19:49.339754] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:30.775 BaseBdev2 00:26:30.775 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:26:30.775 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:26:30.775 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:30.775 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:30.775 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:30.775 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:30.775 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:31.034 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:31.293 [ 00:26:31.293 { 00:26:31.293 "name": "BaseBdev2", 00:26:31.293 "aliases": [ 00:26:31.293 "9cd47a3c-141d-446d-aa41-1c9b0379f492" 00:26:31.293 ], 00:26:31.293 "product_name": "Malloc disk", 00:26:31.293 "block_size": 512, 00:26:31.293 "num_blocks": 65536, 00:26:31.293 "uuid": "9cd47a3c-141d-446d-aa41-1c9b0379f492", 00:26:31.293 "assigned_rate_limits": { 00:26:31.293 "rw_ios_per_sec": 0, 00:26:31.293 "rw_mbytes_per_sec": 0, 00:26:31.293 "r_mbytes_per_sec": 0, 00:26:31.293 "w_mbytes_per_sec": 0 00:26:31.293 }, 00:26:31.293 "claimed": true, 00:26:31.293 "claim_type": "exclusive_write", 00:26:31.293 "zoned": false, 00:26:31.293 "supported_io_types": { 00:26:31.293 "read": true, 00:26:31.293 "write": true, 00:26:31.293 "unmap": true, 00:26:31.293 "write_zeroes": true, 00:26:31.293 "flush": true, 00:26:31.293 "reset": true, 00:26:31.293 "compare": false, 00:26:31.293 "compare_and_write": false, 00:26:31.293 "abort": true, 00:26:31.293 "nvme_admin": false, 00:26:31.293 "nvme_io": false 00:26:31.293 }, 00:26:31.293 "memory_domains": [ 00:26:31.293 { 00:26:31.293 "dma_device_id": "system", 00:26:31.293 "dma_device_type": 1 00:26:31.293 }, 00:26:31.293 { 00:26:31.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.293 "dma_device_type": 2 00:26:31.293 } 00:26:31.293 ], 00:26:31.293 "driver_specific": {} 00:26:31.293 } 00:26:31.293 ] 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.293 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.551 11:19:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:31.551 "name": "Existed_Raid", 00:26:31.551 "uuid": "feaae9b3-9a93-4069-9e1e-052cbdd46dbb", 00:26:31.551 "strip_size_kb": 64, 00:26:31.551 "state": "configuring", 00:26:31.551 "raid_level": "raid0", 00:26:31.551 "superblock": true, 00:26:31.551 "num_base_bdevs": 3, 00:26:31.551 "num_base_bdevs_discovered": 2, 00:26:31.551 "num_base_bdevs_operational": 3, 00:26:31.551 "base_bdevs_list": [ 00:26:31.551 { 00:26:31.551 "name": "BaseBdev1", 00:26:31.551 "uuid": "be3829c8-276f-46f5-9556-24277c650322", 00:26:31.551 "is_configured": true, 00:26:31.551 "data_offset": 2048, 00:26:31.551 "data_size": 63488 00:26:31.551 }, 00:26:31.551 { 00:26:31.551 "name": "BaseBdev2", 00:26:31.551 "uuid": "9cd47a3c-141d-446d-aa41-1c9b0379f492", 00:26:31.551 "is_configured": true, 00:26:31.551 "data_offset": 2048, 00:26:31.551 "data_size": 63488 00:26:31.551 }, 00:26:31.551 { 00:26:31.551 "name": "BaseBdev3", 00:26:31.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.551 "is_configured": false, 00:26:31.551 "data_offset": 0, 00:26:31.551 "data_size": 0 00:26:31.551 } 00:26:31.551 ] 00:26:31.551 }' 00:26:31.551 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:31.551 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.536 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:32.536 [2024-05-15 11:19:51.066571] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:32.536 BaseBdev3 00:26:32.536 [2024-05-15 11:19:51.067369] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:26:32.536 [2024-05-15 11:19:51.067399] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:32.536 [2024-05-15 11:19:51.067574] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:26:32.536 [2024-05-15 11:19:51.067977] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:26:32.536 [2024-05-15 11:19:51.068004] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:26:32.536 [2024-05-15 11:19:51.068205] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.536 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:26:32.536 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:26:32.536 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:32.536 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:32.536 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:32.536 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:32.536 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:32.801 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:26:33.060 [ 00:26:33.060 { 00:26:33.060 "name": "BaseBdev3", 00:26:33.060 "aliases": [ 00:26:33.060 "9fc6439c-09e2-4997-963f-19d428c1407f" 00:26:33.060 ], 00:26:33.060 "product_name": "Malloc disk", 00:26:33.060 "block_size": 512, 00:26:33.060 "num_blocks": 65536, 00:26:33.060 "uuid": "9fc6439c-09e2-4997-963f-19d428c1407f", 00:26:33.060 "assigned_rate_limits": { 00:26:33.060 "rw_ios_per_sec": 0, 00:26:33.060 "rw_mbytes_per_sec": 0, 00:26:33.060 "r_mbytes_per_sec": 0, 00:26:33.060 "w_mbytes_per_sec": 0 00:26:33.060 }, 00:26:33.060 "claimed": true, 00:26:33.060 "claim_type": "exclusive_write", 00:26:33.060 "zoned": false, 00:26:33.060 "supported_io_types": { 00:26:33.060 "read": true, 00:26:33.060 "write": true, 00:26:33.060 "unmap": true, 00:26:33.060 "write_zeroes": true, 00:26:33.060 "flush": true, 00:26:33.060 "reset": true, 00:26:33.060 "compare": false, 00:26:33.060 "compare_and_write": false, 00:26:33.060 "abort": true, 00:26:33.060 "nvme_admin": false, 00:26:33.060 "nvme_io": false 00:26:33.060 }, 00:26:33.060 "memory_domains": [ 00:26:33.060 { 00:26:33.060 "dma_device_id": "system", 00:26:33.060 "dma_device_type": 1 00:26:33.060 }, 00:26:33.060 { 00:26:33.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.060 "dma_device_type": 2 00:26:33.060 } 00:26:33.060 ], 00:26:33.060 "driver_specific": {} 00:26:33.060 } 00:26:33.060 ] 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.060 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:33.319 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:33.319 "name": "Existed_Raid", 00:26:33.319 "uuid": "feaae9b3-9a93-4069-9e1e-052cbdd46dbb", 00:26:33.319 "strip_size_kb": 64, 00:26:33.319 "state": "online", 00:26:33.319 "raid_level": "raid0", 00:26:33.319 "superblock": true, 00:26:33.319 
"num_base_bdevs": 3, 00:26:33.319 "num_base_bdevs_discovered": 3, 00:26:33.319 "num_base_bdevs_operational": 3, 00:26:33.319 "base_bdevs_list": [ 00:26:33.319 { 00:26:33.319 "name": "BaseBdev1", 00:26:33.319 "uuid": "be3829c8-276f-46f5-9556-24277c650322", 00:26:33.319 "is_configured": true, 00:26:33.319 "data_offset": 2048, 00:26:33.319 "data_size": 63488 00:26:33.319 }, 00:26:33.319 { 00:26:33.319 "name": "BaseBdev2", 00:26:33.319 "uuid": "9cd47a3c-141d-446d-aa41-1c9b0379f492", 00:26:33.319 "is_configured": true, 00:26:33.319 "data_offset": 2048, 00:26:33.319 "data_size": 63488 00:26:33.319 }, 00:26:33.319 { 00:26:33.319 "name": "BaseBdev3", 00:26:33.319 "uuid": "9fc6439c-09e2-4997-963f-19d428c1407f", 00:26:33.319 "is_configured": true, 00:26:33.319 "data_offset": 2048, 00:26:33.319 "data_size": 63488 00:26:33.319 } 00:26:33.319 ] 00:26:33.319 }' 00:26:33.319 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:33.319 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.886 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:26:33.886 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:26:33.886 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:33.886 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:33.886 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:33.886 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:26:33.887 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:33.887 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:34.146 [2024-05-15 11:19:52.603044] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:34.146 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:34.146 "name": "Existed_Raid", 00:26:34.146 "aliases": [ 00:26:34.146 "feaae9b3-9a93-4069-9e1e-052cbdd46dbb" 00:26:34.146 ], 00:26:34.146 "product_name": "Raid Volume", 00:26:34.146 "block_size": 512, 00:26:34.146 "num_blocks": 190464, 00:26:34.146 "uuid": "feaae9b3-9a93-4069-9e1e-052cbdd46dbb", 00:26:34.146 "assigned_rate_limits": { 00:26:34.146 "rw_ios_per_sec": 0, 00:26:34.146 "rw_mbytes_per_sec": 0, 00:26:34.146 "r_mbytes_per_sec": 0, 00:26:34.146 "w_mbytes_per_sec": 0 00:26:34.146 }, 00:26:34.146 "claimed": false, 00:26:34.146 "zoned": false, 00:26:34.146 "supported_io_types": { 00:26:34.146 "read": true, 00:26:34.146 "write": true, 00:26:34.146 "unmap": true, 00:26:34.146 "write_zeroes": true, 00:26:34.146 "flush": true, 00:26:34.146 "reset": true, 00:26:34.146 "compare": false, 00:26:34.146 "compare_and_write": false, 00:26:34.146 "abort": false, 00:26:34.146 "nvme_admin": false, 00:26:34.146 "nvme_io": false 00:26:34.146 }, 00:26:34.146 "memory_domains": [ 00:26:34.146 { 00:26:34.146 "dma_device_id": "system", 00:26:34.146 "dma_device_type": 1 00:26:34.146 }, 00:26:34.146 { 00:26:34.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.146 "dma_device_type": 2 00:26:34.146 }, 00:26:34.146 { 00:26:34.146 "dma_device_id": "system", 
00:26:34.146 "dma_device_type": 1 00:26:34.146 }, 00:26:34.146 { 00:26:34.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.146 "dma_device_type": 2 00:26:34.146 }, 00:26:34.146 { 00:26:34.146 "dma_device_id": "system", 00:26:34.146 "dma_device_type": 1 00:26:34.146 }, 00:26:34.146 { 00:26:34.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.146 "dma_device_type": 2 00:26:34.146 } 00:26:34.146 ], 00:26:34.146 "driver_specific": { 00:26:34.146 "raid": { 00:26:34.146 "uuid": "feaae9b3-9a93-4069-9e1e-052cbdd46dbb", 00:26:34.146 "strip_size_kb": 64, 00:26:34.146 "state": "online", 00:26:34.146 "raid_level": "raid0", 00:26:34.146 "superblock": true, 00:26:34.146 "num_base_bdevs": 3, 00:26:34.146 "num_base_bdevs_discovered": 3, 00:26:34.146 "num_base_bdevs_operational": 3, 00:26:34.146 "base_bdevs_list": [ 00:26:34.146 { 00:26:34.146 "name": "BaseBdev1", 00:26:34.146 "uuid": "be3829c8-276f-46f5-9556-24277c650322", 00:26:34.146 "is_configured": true, 00:26:34.146 "data_offset": 2048, 00:26:34.146 "data_size": 63488 00:26:34.146 }, 00:26:34.146 { 00:26:34.146 "name": "BaseBdev2", 00:26:34.146 "uuid": "9cd47a3c-141d-446d-aa41-1c9b0379f492", 00:26:34.146 "is_configured": true, 00:26:34.146 "data_offset": 2048, 00:26:34.146 "data_size": 63488 00:26:34.146 }, 00:26:34.146 { 00:26:34.146 "name": "BaseBdev3", 00:26:34.146 "uuid": "9fc6439c-09e2-4997-963f-19d428c1407f", 00:26:34.146 "is_configured": true, 00:26:34.146 "data_offset": 2048, 00:26:34.146 "data_size": 63488 00:26:34.146 } 00:26:34.146 ] 00:26:34.146 } 00:26:34.146 } 00:26:34.146 }' 00:26:34.146 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:34.146 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:26:34.146 BaseBdev2 00:26:34.146 BaseBdev3' 00:26:34.146 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:34.146 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:34.146 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:34.406 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:34.406 "name": "BaseBdev1", 00:26:34.406 "aliases": [ 00:26:34.406 "be3829c8-276f-46f5-9556-24277c650322" 00:26:34.406 ], 00:26:34.406 "product_name": "Malloc disk", 00:26:34.406 "block_size": 512, 00:26:34.406 "num_blocks": 65536, 00:26:34.406 "uuid": "be3829c8-276f-46f5-9556-24277c650322", 00:26:34.406 "assigned_rate_limits": { 00:26:34.406 "rw_ios_per_sec": 0, 00:26:34.406 "rw_mbytes_per_sec": 0, 00:26:34.406 "r_mbytes_per_sec": 0, 00:26:34.406 "w_mbytes_per_sec": 0 00:26:34.406 }, 00:26:34.406 "claimed": true, 00:26:34.406 "claim_type": "exclusive_write", 00:26:34.406 "zoned": false, 00:26:34.406 "supported_io_types": { 00:26:34.406 "read": true, 00:26:34.406 "write": true, 00:26:34.406 "unmap": true, 00:26:34.406 "write_zeroes": true, 00:26:34.406 "flush": true, 00:26:34.406 "reset": true, 00:26:34.406 "compare": false, 00:26:34.406 "compare_and_write": false, 00:26:34.406 "abort": true, 00:26:34.406 "nvme_admin": false, 00:26:34.406 "nvme_io": false 00:26:34.406 }, 00:26:34.406 "memory_domains": [ 00:26:34.406 { 00:26:34.406 "dma_device_id": "system", 00:26:34.406 "dma_device_type": 1 00:26:34.406 }, 
00:26:34.406 { 00:26:34.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.406 "dma_device_type": 2 00:26:34.406 } 00:26:34.406 ], 00:26:34.406 "driver_specific": {} 00:26:34.406 }' 00:26:34.406 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:34.406 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:34.406 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:34.406 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:34.664 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:34.664 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.664 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:34.664 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:34.664 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.664 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:34.664 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:34.922 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:34.922 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:34.922 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:34.922 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:35.179 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:35.179 "name": "BaseBdev2", 00:26:35.179 "aliases": [ 00:26:35.179 "9cd47a3c-141d-446d-aa41-1c9b0379f492" 00:26:35.179 ], 00:26:35.179 "product_name": "Malloc disk", 00:26:35.179 "block_size": 512, 00:26:35.179 "num_blocks": 65536, 00:26:35.179 "uuid": "9cd47a3c-141d-446d-aa41-1c9b0379f492", 00:26:35.179 "assigned_rate_limits": { 00:26:35.179 "rw_ios_per_sec": 0, 00:26:35.179 "rw_mbytes_per_sec": 0, 00:26:35.179 "r_mbytes_per_sec": 0, 00:26:35.179 "w_mbytes_per_sec": 0 00:26:35.179 }, 00:26:35.179 "claimed": true, 00:26:35.179 "claim_type": "exclusive_write", 00:26:35.179 "zoned": false, 00:26:35.179 "supported_io_types": { 00:26:35.179 "read": true, 00:26:35.179 "write": true, 00:26:35.179 "unmap": true, 00:26:35.179 "write_zeroes": true, 00:26:35.179 "flush": true, 00:26:35.179 "reset": true, 00:26:35.179 "compare": false, 00:26:35.179 "compare_and_write": false, 00:26:35.179 "abort": true, 00:26:35.179 "nvme_admin": false, 00:26:35.179 "nvme_io": false 00:26:35.179 }, 00:26:35.179 "memory_domains": [ 00:26:35.179 { 00:26:35.179 "dma_device_id": "system", 00:26:35.179 "dma_device_type": 1 00:26:35.179 }, 00:26:35.179 { 00:26:35.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.179 "dma_device_type": 2 00:26:35.179 } 00:26:35.179 ], 00:26:35.179 "driver_specific": {} 00:26:35.179 }' 00:26:35.179 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:35.179 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:35.179 11:19:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:35.179 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:35.179 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:35.437 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:35.437 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:35.437 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:35.437 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:35.437 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:35.437 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:35.437 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:35.437 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:35.437 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:35.437 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:35.694 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:35.694 "name": "BaseBdev3", 00:26:35.694 "aliases": [ 00:26:35.694 "9fc6439c-09e2-4997-963f-19d428c1407f" 00:26:35.694 ], 00:26:35.694 "product_name": "Malloc disk", 00:26:35.694 "block_size": 512, 00:26:35.694 "num_blocks": 65536, 00:26:35.694 "uuid": "9fc6439c-09e2-4997-963f-19d428c1407f", 00:26:35.694 "assigned_rate_limits": { 00:26:35.694 "rw_ios_per_sec": 0, 00:26:35.694 "rw_mbytes_per_sec": 0, 00:26:35.694 "r_mbytes_per_sec": 0, 00:26:35.694 "w_mbytes_per_sec": 0 00:26:35.694 }, 00:26:35.694 "claimed": true, 00:26:35.694 "claim_type": "exclusive_write", 00:26:35.694 "zoned": false, 00:26:35.694 "supported_io_types": { 00:26:35.694 "read": true, 00:26:35.694 "write": true, 00:26:35.694 "unmap": true, 00:26:35.694 "write_zeroes": true, 00:26:35.694 "flush": true, 00:26:35.694 "reset": true, 00:26:35.694 "compare": false, 00:26:35.694 "compare_and_write": false, 00:26:35.694 "abort": true, 00:26:35.694 "nvme_admin": false, 00:26:35.694 "nvme_io": false 00:26:35.694 }, 00:26:35.694 "memory_domains": [ 00:26:35.694 { 00:26:35.694 "dma_device_id": "system", 00:26:35.694 "dma_device_type": 1 00:26:35.695 }, 00:26:35.695 { 00:26:35.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.695 "dma_device_type": 2 00:26:35.695 } 00:26:35.695 ], 00:26:35.695 "driver_specific": {} 00:26:35.695 }' 00:26:35.695 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:35.695 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:35.952 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:35.952 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:35.952 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:35.952 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:35.952 11:19:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:35.952 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:36.211 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:36.211 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:36.211 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:36.211 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:36.211 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:36.468 [2024-05-15 11:19:54.899288] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:36.468 [2024-05-15 11:19:54.899328] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:36.468 [2024-05-15 11:19:54.899376] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:36.468 11:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:36.468 11:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.468 11:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.726 11:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:36.726 "name": "Existed_Raid", 00:26:36.726 "uuid": "feaae9b3-9a93-4069-9e1e-052cbdd46dbb", 00:26:36.726 "strip_size_kb": 64, 00:26:36.726 "state": "offline", 00:26:36.726 "raid_level": "raid0", 00:26:36.726 "superblock": true, 00:26:36.726 
"num_base_bdevs": 3, 00:26:36.726 "num_base_bdevs_discovered": 2, 00:26:36.726 "num_base_bdevs_operational": 2, 00:26:36.726 "base_bdevs_list": [ 00:26:36.726 { 00:26:36.726 "name": null, 00:26:36.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.726 "is_configured": false, 00:26:36.726 "data_offset": 2048, 00:26:36.726 "data_size": 63488 00:26:36.726 }, 00:26:36.726 { 00:26:36.726 "name": "BaseBdev2", 00:26:36.726 "uuid": "9cd47a3c-141d-446d-aa41-1c9b0379f492", 00:26:36.726 "is_configured": true, 00:26:36.726 "data_offset": 2048, 00:26:36.726 "data_size": 63488 00:26:36.726 }, 00:26:36.726 { 00:26:36.726 "name": "BaseBdev3", 00:26:36.726 "uuid": "9fc6439c-09e2-4997-963f-19d428c1407f", 00:26:36.726 "is_configured": true, 00:26:36.726 "data_offset": 2048, 00:26:36.726 "data_size": 63488 00:26:36.726 } 00:26:36.726 ] 00:26:36.726 }' 00:26:36.726 11:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:36.726 11:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.293 11:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:37.293 11:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:37.293 11:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.293 11:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:26:37.551 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:26:37.551 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:37.551 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:37.809 [2024-05-15 11:19:56.298712] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:37.809 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:37.809 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:37.809 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.809 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:26:38.066 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:26:38.066 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:38.066 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:38.325 [2024-05-15 11:19:56.841886] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:38.325 [2024-05-15 11:19:56.841947] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:26:38.325 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:38.325 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:38.325 
11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.325 11:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:26:38.583 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:26:38.583 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:26:38.583 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:26:38.583 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:26:38.583 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:26:38.583 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:38.841 BaseBdev2 00:26:38.841 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:26:38.841 11:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:26:38.841 11:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:38.841 11:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:38.841 11:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:38.841 11:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:38.841 11:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:39.099 11:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:39.357 [ 00:26:39.357 { 00:26:39.357 "name": "BaseBdev2", 00:26:39.357 "aliases": [ 00:26:39.357 "391e9de0-2f0b-4aa6-9792-eb758ed6729a" 00:26:39.357 ], 00:26:39.357 "product_name": "Malloc disk", 00:26:39.357 "block_size": 512, 00:26:39.357 "num_blocks": 65536, 00:26:39.357 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:39.357 "assigned_rate_limits": { 00:26:39.357 "rw_ios_per_sec": 0, 00:26:39.357 "rw_mbytes_per_sec": 0, 00:26:39.357 "r_mbytes_per_sec": 0, 00:26:39.357 "w_mbytes_per_sec": 0 00:26:39.357 }, 00:26:39.357 "claimed": false, 00:26:39.357 "zoned": false, 00:26:39.357 "supported_io_types": { 00:26:39.357 "read": true, 00:26:39.357 "write": true, 00:26:39.357 "unmap": true, 00:26:39.357 "write_zeroes": true, 00:26:39.357 "flush": true, 00:26:39.357 "reset": true, 00:26:39.357 "compare": false, 00:26:39.357 "compare_and_write": false, 00:26:39.357 "abort": true, 00:26:39.357 "nvme_admin": false, 00:26:39.357 "nvme_io": false 00:26:39.357 }, 00:26:39.357 "memory_domains": [ 00:26:39.357 { 00:26:39.357 "dma_device_id": "system", 00:26:39.357 "dma_device_type": 1 00:26:39.357 }, 00:26:39.357 { 00:26:39.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.357 "dma_device_type": 2 00:26:39.357 } 00:26:39.357 ], 00:26:39.357 "driver_specific": {} 00:26:39.357 } 00:26:39.357 ] 00:26:39.357 11:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 
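For reference, the create-and-wait pattern that the trace above keeps repeating comes down to three RPC calls against the dedicated raid test socket. This is a minimal sketch, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and using the 32 MiB / 512-byte-block malloc geometry shown in the log:

  # Create a 32 MiB malloc bdev with 512-byte blocks to serve as a RAID member
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  # Let any pending examine callbacks finish before the bdev is used
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  # Confirm the bdev is visible; -t 2000 gives up after 2000 ms
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000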
00:26:39.357 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:26:39.357 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:26:39.357 11:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:39.617 BaseBdev3 00:26:39.617 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:26:39.617 11:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:26:39.617 11:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:39.617 11:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:39.617 11:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:39.617 11:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:39.617 11:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:39.877 11:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:40.135 [ 00:26:40.135 { 00:26:40.135 "name": "BaseBdev3", 00:26:40.135 "aliases": [ 00:26:40.135 "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d" 00:26:40.135 ], 00:26:40.135 "product_name": "Malloc disk", 00:26:40.135 "block_size": 512, 00:26:40.135 "num_blocks": 65536, 00:26:40.135 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:40.135 "assigned_rate_limits": { 00:26:40.135 "rw_ios_per_sec": 0, 00:26:40.135 "rw_mbytes_per_sec": 0, 00:26:40.135 "r_mbytes_per_sec": 0, 00:26:40.135 "w_mbytes_per_sec": 0 00:26:40.135 }, 00:26:40.135 "claimed": false, 00:26:40.135 "zoned": false, 00:26:40.135 "supported_io_types": { 00:26:40.135 "read": true, 00:26:40.135 "write": true, 00:26:40.135 "unmap": true, 00:26:40.135 "write_zeroes": true, 00:26:40.135 "flush": true, 00:26:40.135 "reset": true, 00:26:40.135 "compare": false, 00:26:40.135 "compare_and_write": false, 00:26:40.135 "abort": true, 00:26:40.135 "nvme_admin": false, 00:26:40.135 "nvme_io": false 00:26:40.135 }, 00:26:40.135 "memory_domains": [ 00:26:40.135 { 00:26:40.135 "dma_device_id": "system", 00:26:40.135 "dma_device_type": 1 00:26:40.135 }, 00:26:40.135 { 00:26:40.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.135 "dma_device_type": 2 00:26:40.135 } 00:26:40.135 ], 00:26:40.135 "driver_specific": {} 00:26:40.135 } 00:26:40.135 ] 00:26:40.135 11:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:40.135 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:26:40.135 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:26:40.135 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:40.393 [2024-05-15 11:19:58.809841] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:40.393 
[2024-05-15 11:19:58.809933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:40.393 [2024-05-15 11:19:58.809960] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:40.393 [2024-05-15 11:19:58.811367] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.393 11:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:40.651 11:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:40.651 "name": "Existed_Raid", 00:26:40.651 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:40.651 "strip_size_kb": 64, 00:26:40.651 "state": "configuring", 00:26:40.651 "raid_level": "raid0", 00:26:40.651 "superblock": true, 00:26:40.651 "num_base_bdevs": 3, 00:26:40.651 "num_base_bdevs_discovered": 2, 00:26:40.651 "num_base_bdevs_operational": 3, 00:26:40.651 "base_bdevs_list": [ 00:26:40.651 { 00:26:40.651 "name": "BaseBdev1", 00:26:40.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.651 "is_configured": false, 00:26:40.651 "data_offset": 0, 00:26:40.651 "data_size": 0 00:26:40.651 }, 00:26:40.651 { 00:26:40.651 "name": "BaseBdev2", 00:26:40.651 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:40.651 "is_configured": true, 00:26:40.651 "data_offset": 2048, 00:26:40.651 "data_size": 63488 00:26:40.651 }, 00:26:40.651 { 00:26:40.651 "name": "BaseBdev3", 00:26:40.651 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:40.651 "is_configured": true, 00:26:40.651 "data_offset": 2048, 00:26:40.651 "data_size": 63488 00:26:40.651 } 00:26:40.651 ] 00:26:40.651 }' 00:26:40.651 11:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:40.651 11:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.217 11:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:41.477 [2024-05-15 11:19:59.998014] 
bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.477 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.735 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:41.735 "name": "Existed_Raid", 00:26:41.735 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:41.735 "strip_size_kb": 64, 00:26:41.735 "state": "configuring", 00:26:41.735 "raid_level": "raid0", 00:26:41.735 "superblock": true, 00:26:41.735 "num_base_bdevs": 3, 00:26:41.735 "num_base_bdevs_discovered": 1, 00:26:41.735 "num_base_bdevs_operational": 3, 00:26:41.735 "base_bdevs_list": [ 00:26:41.735 { 00:26:41.735 "name": "BaseBdev1", 00:26:41.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.735 "is_configured": false, 00:26:41.735 "data_offset": 0, 00:26:41.735 "data_size": 0 00:26:41.735 }, 00:26:41.735 { 00:26:41.735 "name": null, 00:26:41.735 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:41.735 "is_configured": false, 00:26:41.736 "data_offset": 2048, 00:26:41.736 "data_size": 63488 00:26:41.736 }, 00:26:41.736 { 00:26:41.736 "name": "BaseBdev3", 00:26:41.736 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:41.736 "is_configured": true, 00:26:41.736 "data_offset": 2048, 00:26:41.736 "data_size": 63488 00:26:41.736 } 00:26:41.736 ] 00:26:41.736 }' 00:26:41.736 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:41.736 11:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.670 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.670 11:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:42.670 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:26:42.670 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:42.929 [2024-05-15 11:20:01.484548] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:42.929 BaseBdev1 00:26:42.929 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:26:42.929 11:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:26:42.929 11:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:42.929 11:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:42.929 11:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:42.929 11:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:42.929 11:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:43.187 11:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:43.445 [ 00:26:43.445 { 00:26:43.445 "name": "BaseBdev1", 00:26:43.445 "aliases": [ 00:26:43.445 "2b920485-40bb-46c2-8b4c-b521b672e82e" 00:26:43.445 ], 00:26:43.445 "product_name": "Malloc disk", 00:26:43.445 "block_size": 512, 00:26:43.445 "num_blocks": 65536, 00:26:43.445 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:43.445 "assigned_rate_limits": { 00:26:43.445 "rw_ios_per_sec": 0, 00:26:43.445 "rw_mbytes_per_sec": 0, 00:26:43.445 "r_mbytes_per_sec": 0, 00:26:43.445 "w_mbytes_per_sec": 0 00:26:43.445 }, 00:26:43.445 "claimed": true, 00:26:43.445 "claim_type": "exclusive_write", 00:26:43.445 "zoned": false, 00:26:43.445 "supported_io_types": { 00:26:43.445 "read": true, 00:26:43.445 "write": true, 00:26:43.445 "unmap": true, 00:26:43.445 "write_zeroes": true, 00:26:43.445 "flush": true, 00:26:43.445 "reset": true, 00:26:43.445 "compare": false, 00:26:43.445 "compare_and_write": false, 00:26:43.445 "abort": true, 00:26:43.445 "nvme_admin": false, 00:26:43.445 "nvme_io": false 00:26:43.445 }, 00:26:43.445 "memory_domains": [ 00:26:43.445 { 00:26:43.445 "dma_device_id": "system", 00:26:43.445 "dma_device_type": 1 00:26:43.445 }, 00:26:43.445 { 00:26:43.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.445 "dma_device_type": 2 00:26:43.445 } 00:26:43.445 ], 00:26:43.445 "driver_specific": {} 00:26:43.445 } 00:26:43.445 ] 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.445 11:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:43.703 11:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:43.703 "name": "Existed_Raid", 00:26:43.703 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:43.703 "strip_size_kb": 64, 00:26:43.703 "state": "configuring", 00:26:43.703 "raid_level": "raid0", 00:26:43.703 "superblock": true, 00:26:43.703 "num_base_bdevs": 3, 00:26:43.703 "num_base_bdevs_discovered": 2, 00:26:43.704 "num_base_bdevs_operational": 3, 00:26:43.704 "base_bdevs_list": [ 00:26:43.704 { 00:26:43.704 "name": "BaseBdev1", 00:26:43.704 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:43.704 "is_configured": true, 00:26:43.704 "data_offset": 2048, 00:26:43.704 "data_size": 63488 00:26:43.704 }, 00:26:43.704 { 00:26:43.704 "name": null, 00:26:43.704 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:43.704 "is_configured": false, 00:26:43.704 "data_offset": 2048, 00:26:43.704 "data_size": 63488 00:26:43.704 }, 00:26:43.704 { 00:26:43.704 "name": "BaseBdev3", 00:26:43.704 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:43.704 "is_configured": true, 00:26:43.704 "data_offset": 2048, 00:26:43.704 "data_size": 63488 00:26:43.704 } 00:26:43.704 ] 00:26:43.704 }' 00:26:43.704 11:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:43.704 11:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.269 11:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.269 11:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:44.527 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:44.527 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:44.784 [2024-05-15 11:20:03.384957] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:44.784 
11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.784 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:45.041 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:45.041 "name": "Existed_Raid", 00:26:45.041 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:45.041 "strip_size_kb": 64, 00:26:45.041 "state": "configuring", 00:26:45.041 "raid_level": "raid0", 00:26:45.041 "superblock": true, 00:26:45.041 "num_base_bdevs": 3, 00:26:45.041 "num_base_bdevs_discovered": 1, 00:26:45.041 "num_base_bdevs_operational": 3, 00:26:45.041 "base_bdevs_list": [ 00:26:45.041 { 00:26:45.041 "name": "BaseBdev1", 00:26:45.041 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:45.041 "is_configured": true, 00:26:45.041 "data_offset": 2048, 00:26:45.041 "data_size": 63488 00:26:45.041 }, 00:26:45.041 { 00:26:45.041 "name": null, 00:26:45.041 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:45.041 "is_configured": false, 00:26:45.041 "data_offset": 2048, 00:26:45.041 "data_size": 63488 00:26:45.041 }, 00:26:45.041 { 00:26:45.041 "name": null, 00:26:45.041 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:45.041 "is_configured": false, 00:26:45.041 "data_offset": 2048, 00:26:45.041 "data_size": 63488 00:26:45.041 } 00:26:45.041 ] 00:26:45.041 }' 00:26:45.041 11:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:45.041 11:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.976 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.976 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:45.976 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:26:45.976 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:46.234 [2024-05-15 11:20:04.645166] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid0 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.234 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:46.493 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:46.493 "name": "Existed_Raid", 00:26:46.493 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:46.493 "strip_size_kb": 64, 00:26:46.493 "state": "configuring", 00:26:46.493 "raid_level": "raid0", 00:26:46.493 "superblock": true, 00:26:46.493 "num_base_bdevs": 3, 00:26:46.493 "num_base_bdevs_discovered": 2, 00:26:46.493 "num_base_bdevs_operational": 3, 00:26:46.493 "base_bdevs_list": [ 00:26:46.493 { 00:26:46.493 "name": "BaseBdev1", 00:26:46.493 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:46.493 "is_configured": true, 00:26:46.493 "data_offset": 2048, 00:26:46.493 "data_size": 63488 00:26:46.493 }, 00:26:46.493 { 00:26:46.493 "name": null, 00:26:46.493 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:46.493 "is_configured": false, 00:26:46.493 "data_offset": 2048, 00:26:46.493 "data_size": 63488 00:26:46.493 }, 00:26:46.493 { 00:26:46.493 "name": "BaseBdev3", 00:26:46.493 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:46.493 "is_configured": true, 00:26:46.493 "data_offset": 2048, 00:26:46.493 "data_size": 63488 00:26:46.493 } 00:26:46.493 ] 00:26:46.493 }' 00:26:46.493 11:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:46.493 11:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.059 11:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.059 11:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:47.317 11:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:26:47.317 11:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:47.576 [2024-05-15 11:20:06.085435] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.576 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.834 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:47.834 "name": "Existed_Raid", 00:26:47.834 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:47.834 "strip_size_kb": 64, 00:26:47.834 "state": "configuring", 00:26:47.834 "raid_level": "raid0", 00:26:47.834 "superblock": true, 00:26:47.834 "num_base_bdevs": 3, 00:26:47.834 "num_base_bdevs_discovered": 1, 00:26:47.834 "num_base_bdevs_operational": 3, 00:26:47.834 "base_bdevs_list": [ 00:26:47.834 { 00:26:47.834 "name": null, 00:26:47.834 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:47.834 "is_configured": false, 00:26:47.834 "data_offset": 2048, 00:26:47.834 "data_size": 63488 00:26:47.834 }, 00:26:47.834 { 00:26:47.834 "name": null, 00:26:47.834 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:47.834 "is_configured": false, 00:26:47.834 "data_offset": 2048, 00:26:47.834 "data_size": 63488 00:26:47.834 }, 00:26:47.834 { 00:26:47.834 "name": "BaseBdev3", 00:26:47.834 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:47.834 "is_configured": true, 00:26:47.834 "data_offset": 2048, 00:26:47.834 "data_size": 63488 00:26:47.834 } 00:26:47.834 ] 00:26:47.834 }' 00:26:47.834 11:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:47.834 11:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.405 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.405 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:48.664 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:26:48.664 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:48.921 [2024-05-15 11:20:07.489790] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:48.921 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:48.921 11:20:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:48.921 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:48.921 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:48.921 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:48.921 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:48.922 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:48.922 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:48.922 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:48.922 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:48.922 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.922 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.180 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:49.180 "name": "Existed_Raid", 00:26:49.180 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:49.180 "strip_size_kb": 64, 00:26:49.180 "state": "configuring", 00:26:49.180 "raid_level": "raid0", 00:26:49.180 "superblock": true, 00:26:49.180 "num_base_bdevs": 3, 00:26:49.180 "num_base_bdevs_discovered": 2, 00:26:49.180 "num_base_bdevs_operational": 3, 00:26:49.180 "base_bdevs_list": [ 00:26:49.180 { 00:26:49.180 "name": null, 00:26:49.180 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:49.180 "is_configured": false, 00:26:49.180 "data_offset": 2048, 00:26:49.180 "data_size": 63488 00:26:49.180 }, 00:26:49.180 { 00:26:49.180 "name": "BaseBdev2", 00:26:49.180 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:49.180 "is_configured": true, 00:26:49.180 "data_offset": 2048, 00:26:49.180 "data_size": 63488 00:26:49.180 }, 00:26:49.180 { 00:26:49.180 "name": "BaseBdev3", 00:26:49.180 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:49.180 "is_configured": true, 00:26:49.180 "data_offset": 2048, 00:26:49.180 "data_size": 63488 00:26:49.180 } 00:26:49.180 ] 00:26:49.180 }' 00:26:49.180 11:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:49.180 11:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.112 11:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.112 11:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:50.112 11:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:26:50.112 11:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.112 11:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:50.369 11:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2b920485-40bb-46c2-8b4c-b521b672e82e 00:26:50.628 NewBaseBdev 00:26:50.628 [2024-05-15 11:20:09.239045] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:50.628 [2024-05-15 11:20:09.239213] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:26:50.628 [2024-05-15 11:20:09.239229] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:50.628 [2024-05-15 11:20:09.239306] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:50.628 [2024-05-15 11:20:09.239546] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:26:50.628 [2024-05-15 11:20:09.239562] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:26:50.628 [2024-05-15 11:20:09.239656] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:50.628 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:26:50.628 11:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:26:50.628 11:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:50.628 11:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:50.628 11:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:50.628 11:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:50.628 11:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:50.886 11:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:51.144 [ 00:26:51.144 { 00:26:51.144 "name": "NewBaseBdev", 00:26:51.144 "aliases": [ 00:26:51.144 "2b920485-40bb-46c2-8b4c-b521b672e82e" 00:26:51.144 ], 00:26:51.144 "product_name": "Malloc disk", 00:26:51.144 "block_size": 512, 00:26:51.144 "num_blocks": 65536, 00:26:51.144 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:51.144 "assigned_rate_limits": { 00:26:51.144 "rw_ios_per_sec": 0, 00:26:51.144 "rw_mbytes_per_sec": 0, 00:26:51.144 "r_mbytes_per_sec": 0, 00:26:51.144 "w_mbytes_per_sec": 0 00:26:51.144 }, 00:26:51.144 "claimed": true, 00:26:51.144 "claim_type": "exclusive_write", 00:26:51.144 "zoned": false, 00:26:51.144 "supported_io_types": { 00:26:51.144 "read": true, 00:26:51.144 "write": true, 00:26:51.144 "unmap": true, 00:26:51.144 "write_zeroes": true, 00:26:51.144 "flush": true, 00:26:51.144 "reset": true, 00:26:51.144 "compare": false, 00:26:51.144 "compare_and_write": false, 00:26:51.144 "abort": true, 00:26:51.144 "nvme_admin": false, 00:26:51.144 "nvme_io": false 00:26:51.144 }, 00:26:51.144 "memory_domains": [ 00:26:51.144 { 00:26:51.144 "dma_device_id": "system", 00:26:51.144 "dma_device_type": 1 00:26:51.144 }, 00:26:51.144 { 00:26:51.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.144 "dma_device_type": 2 00:26:51.144 } 00:26:51.144 ], 00:26:51.144 "driver_specific": {} 00:26:51.144 } 00:26:51.144 ] 00:26:51.144 11:20:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.144 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.403 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:51.403 "name": "Existed_Raid", 00:26:51.403 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:51.403 "strip_size_kb": 64, 00:26:51.403 "state": "online", 00:26:51.403 "raid_level": "raid0", 00:26:51.403 "superblock": true, 00:26:51.403 "num_base_bdevs": 3, 00:26:51.403 "num_base_bdevs_discovered": 3, 00:26:51.403 "num_base_bdevs_operational": 3, 00:26:51.403 "base_bdevs_list": [ 00:26:51.403 { 00:26:51.403 "name": "NewBaseBdev", 00:26:51.403 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:51.403 "is_configured": true, 00:26:51.403 "data_offset": 2048, 00:26:51.403 "data_size": 63488 00:26:51.403 }, 00:26:51.403 { 00:26:51.403 "name": "BaseBdev2", 00:26:51.403 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:51.403 "is_configured": true, 00:26:51.403 "data_offset": 2048, 00:26:51.403 "data_size": 63488 00:26:51.403 }, 00:26:51.403 { 00:26:51.403 "name": "BaseBdev3", 00:26:51.403 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:51.403 "is_configured": true, 00:26:51.403 "data_offset": 2048, 00:26:51.403 "data_size": 63488 00:26:51.403 } 00:26:51.403 ] 00:26:51.403 }' 00:26:51.403 11:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:51.403 11:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.337 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:26:52.337 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:26:52.337 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:52.337 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:52.337 11:20:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:52.337 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:26:52.337 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:52.337 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:52.337 [2024-05-15 11:20:10.899598] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:52.337 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:52.337 "name": "Existed_Raid", 00:26:52.337 "aliases": [ 00:26:52.337 "39c805c0-933b-4d0b-9cce-98e2e3bf2c09" 00:26:52.337 ], 00:26:52.337 "product_name": "Raid Volume", 00:26:52.337 "block_size": 512, 00:26:52.337 "num_blocks": 190464, 00:26:52.337 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:52.337 "assigned_rate_limits": { 00:26:52.337 "rw_ios_per_sec": 0, 00:26:52.337 "rw_mbytes_per_sec": 0, 00:26:52.337 "r_mbytes_per_sec": 0, 00:26:52.337 "w_mbytes_per_sec": 0 00:26:52.337 }, 00:26:52.337 "claimed": false, 00:26:52.337 "zoned": false, 00:26:52.337 "supported_io_types": { 00:26:52.337 "read": true, 00:26:52.337 "write": true, 00:26:52.337 "unmap": true, 00:26:52.337 "write_zeroes": true, 00:26:52.337 "flush": true, 00:26:52.337 "reset": true, 00:26:52.337 "compare": false, 00:26:52.337 "compare_and_write": false, 00:26:52.337 "abort": false, 00:26:52.337 "nvme_admin": false, 00:26:52.337 "nvme_io": false 00:26:52.337 }, 00:26:52.337 "memory_domains": [ 00:26:52.337 { 00:26:52.337 "dma_device_id": "system", 00:26:52.337 "dma_device_type": 1 00:26:52.337 }, 00:26:52.337 { 00:26:52.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.337 "dma_device_type": 2 00:26:52.337 }, 00:26:52.337 { 00:26:52.337 "dma_device_id": "system", 00:26:52.337 "dma_device_type": 1 00:26:52.337 }, 00:26:52.337 { 00:26:52.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.337 "dma_device_type": 2 00:26:52.337 }, 00:26:52.337 { 00:26:52.337 "dma_device_id": "system", 00:26:52.337 "dma_device_type": 1 00:26:52.337 }, 00:26:52.337 { 00:26:52.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.337 "dma_device_type": 2 00:26:52.337 } 00:26:52.337 ], 00:26:52.337 "driver_specific": { 00:26:52.337 "raid": { 00:26:52.337 "uuid": "39c805c0-933b-4d0b-9cce-98e2e3bf2c09", 00:26:52.337 "strip_size_kb": 64, 00:26:52.337 "state": "online", 00:26:52.338 "raid_level": "raid0", 00:26:52.338 "superblock": true, 00:26:52.338 "num_base_bdevs": 3, 00:26:52.338 "num_base_bdevs_discovered": 3, 00:26:52.338 "num_base_bdevs_operational": 3, 00:26:52.338 "base_bdevs_list": [ 00:26:52.338 { 00:26:52.338 "name": "NewBaseBdev", 00:26:52.338 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:52.338 "is_configured": true, 00:26:52.338 "data_offset": 2048, 00:26:52.338 "data_size": 63488 00:26:52.338 }, 00:26:52.338 { 00:26:52.338 "name": "BaseBdev2", 00:26:52.338 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:52.338 "is_configured": true, 00:26:52.338 "data_offset": 2048, 00:26:52.338 "data_size": 63488 00:26:52.338 }, 00:26:52.338 { 00:26:52.338 "name": "BaseBdev3", 00:26:52.338 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:52.338 "is_configured": true, 00:26:52.338 "data_offset": 2048, 00:26:52.338 "data_size": 63488 00:26:52.338 } 00:26:52.338 ] 00:26:52.338 } 00:26:52.338 } 00:26:52.338 }' 00:26:52.338 11:20:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:52.596 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:26:52.596 BaseBdev2 00:26:52.596 BaseBdev3' 00:26:52.596 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:52.596 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:52.596 11:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:52.596 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:52.596 "name": "NewBaseBdev", 00:26:52.596 "aliases": [ 00:26:52.596 "2b920485-40bb-46c2-8b4c-b521b672e82e" 00:26:52.596 ], 00:26:52.596 "product_name": "Malloc disk", 00:26:52.596 "block_size": 512, 00:26:52.596 "num_blocks": 65536, 00:26:52.596 "uuid": "2b920485-40bb-46c2-8b4c-b521b672e82e", 00:26:52.596 "assigned_rate_limits": { 00:26:52.596 "rw_ios_per_sec": 0, 00:26:52.596 "rw_mbytes_per_sec": 0, 00:26:52.596 "r_mbytes_per_sec": 0, 00:26:52.596 "w_mbytes_per_sec": 0 00:26:52.596 }, 00:26:52.596 "claimed": true, 00:26:52.596 "claim_type": "exclusive_write", 00:26:52.596 "zoned": false, 00:26:52.596 "supported_io_types": { 00:26:52.596 "read": true, 00:26:52.596 "write": true, 00:26:52.596 "unmap": true, 00:26:52.596 "write_zeroes": true, 00:26:52.596 "flush": true, 00:26:52.596 "reset": true, 00:26:52.596 "compare": false, 00:26:52.596 "compare_and_write": false, 00:26:52.596 "abort": true, 00:26:52.596 "nvme_admin": false, 00:26:52.596 "nvme_io": false 00:26:52.596 }, 00:26:52.596 "memory_domains": [ 00:26:52.596 { 00:26:52.596 "dma_device_id": "system", 00:26:52.596 "dma_device_type": 1 00:26:52.596 }, 00:26:52.596 { 00:26:52.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.596 "dma_device_type": 2 00:26:52.596 } 00:26:52.596 ], 00:26:52.596 "driver_specific": {} 00:26:52.596 }' 00:26:52.596 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:52.854 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:52.854 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:52.854 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:52.854 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:52.854 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:52.854 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:52.854 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:53.112 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:53.112 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:53.112 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:53.112 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:53.112 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 
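[Editor's sketch, not captured output] The property walk that follows in the trace (bdev_raid.sh@195-209) compares the raid volume's block_size and metadata settings against each configured base bdev. A condensed version of that loop is sketched below, under the same assumption of a live /var/tmp/spdk-raid.sock socket and the Existed_Raid volume from this run.

#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_properties loop from the trace, condensed.
# Assumes bdev_svc is serving /var/tmp/spdk-raid.sock.
set -euo pipefail

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_name=Existed_Raid

raid_info=$($rpc bdev_get_bdevs -b "$raid_name" | jq '.[]')

# Names of the base bdevs currently configured into the raid volume.
base_bdev_names=$(jq -r \
    '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
    <<<"$raid_info")

for name in $base_bdev_names; do
    base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # The raid volume must expose the same block size and metadata layout
    # as every configured base bdev (bdev_raid.sh@206-209).
    [[ $(jq .block_size    <<<"$raid_info") == $(jq .block_size    <<<"$base_info") ]]
    [[ $(jq .md_size       <<<"$raid_info") == $(jq .md_size       <<<"$base_info") ]]
    [[ $(jq .md_interleave <<<"$raid_info") == $(jq .md_interleave <<<"$base_info") ]]
    [[ $(jq .dif_type      <<<"$raid_info") == $(jq .dif_type      <<<"$base_info") ]]
done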
00:26:53.112 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:53.112 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:53.369 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:53.369 "name": "BaseBdev2", 00:26:53.369 "aliases": [ 00:26:53.369 "391e9de0-2f0b-4aa6-9792-eb758ed6729a" 00:26:53.369 ], 00:26:53.369 "product_name": "Malloc disk", 00:26:53.369 "block_size": 512, 00:26:53.369 "num_blocks": 65536, 00:26:53.369 "uuid": "391e9de0-2f0b-4aa6-9792-eb758ed6729a", 00:26:53.369 "assigned_rate_limits": { 00:26:53.369 "rw_ios_per_sec": 0, 00:26:53.369 "rw_mbytes_per_sec": 0, 00:26:53.370 "r_mbytes_per_sec": 0, 00:26:53.370 "w_mbytes_per_sec": 0 00:26:53.370 }, 00:26:53.370 "claimed": true, 00:26:53.370 "claim_type": "exclusive_write", 00:26:53.370 "zoned": false, 00:26:53.370 "supported_io_types": { 00:26:53.370 "read": true, 00:26:53.370 "write": true, 00:26:53.370 "unmap": true, 00:26:53.370 "write_zeroes": true, 00:26:53.370 "flush": true, 00:26:53.370 "reset": true, 00:26:53.370 "compare": false, 00:26:53.370 "compare_and_write": false, 00:26:53.370 "abort": true, 00:26:53.370 "nvme_admin": false, 00:26:53.370 "nvme_io": false 00:26:53.370 }, 00:26:53.370 "memory_domains": [ 00:26:53.370 { 00:26:53.370 "dma_device_id": "system", 00:26:53.370 "dma_device_type": 1 00:26:53.370 }, 00:26:53.370 { 00:26:53.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.370 "dma_device_type": 2 00:26:53.370 } 00:26:53.370 ], 00:26:53.370 "driver_specific": {} 00:26:53.370 }' 00:26:53.370 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:53.370 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:53.370 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:53.370 11:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:53.628 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:53.628 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:53.628 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:53.628 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:53.628 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:53.628 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:53.628 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:53.886 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:53.886 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:53.886 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:53.886 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:53.886 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:53.886 "name": "BaseBdev3", 00:26:53.887 "aliases": 
[ 00:26:53.887 "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d" 00:26:53.887 ], 00:26:53.887 "product_name": "Malloc disk", 00:26:53.887 "block_size": 512, 00:26:53.887 "num_blocks": 65536, 00:26:53.887 "uuid": "70ac749c-f4e5-4e5d-a91b-9a956a4b6a9d", 00:26:53.887 "assigned_rate_limits": { 00:26:53.887 "rw_ios_per_sec": 0, 00:26:53.887 "rw_mbytes_per_sec": 0, 00:26:53.887 "r_mbytes_per_sec": 0, 00:26:53.887 "w_mbytes_per_sec": 0 00:26:53.887 }, 00:26:53.887 "claimed": true, 00:26:53.887 "claim_type": "exclusive_write", 00:26:53.887 "zoned": false, 00:26:53.887 "supported_io_types": { 00:26:53.887 "read": true, 00:26:53.887 "write": true, 00:26:53.887 "unmap": true, 00:26:53.887 "write_zeroes": true, 00:26:53.887 "flush": true, 00:26:53.887 "reset": true, 00:26:53.887 "compare": false, 00:26:53.887 "compare_and_write": false, 00:26:53.887 "abort": true, 00:26:53.887 "nvme_admin": false, 00:26:53.887 "nvme_io": false 00:26:53.887 }, 00:26:53.887 "memory_domains": [ 00:26:53.887 { 00:26:53.887 "dma_device_id": "system", 00:26:53.887 "dma_device_type": 1 00:26:53.887 }, 00:26:53.887 { 00:26:53.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.887 "dma_device_type": 2 00:26:53.887 } 00:26:53.887 ], 00:26:53.887 "driver_specific": {} 00:26:53.887 }' 00:26:53.887 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:54.145 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:54.145 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:54.145 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:54.145 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:54.145 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:54.145 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:54.145 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:54.403 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:54.403 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:54.403 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:54.403 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:54.403 11:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:54.669 [2024-05-15 11:20:13.131806] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:54.669 [2024-05-15 11:20:13.131891] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:54.669 [2024-05-15 11:20:13.131963] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:54.669 [2024-05-15 11:20:13.132017] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:54.670 [2024-05-15 11:20:13.132028] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 57686 00:26:54.670 11:20:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 57686 ']' 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 57686 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 57686 00:26:54.670 killing process with pid 57686 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 57686' 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 57686 00:26:54.670 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 57686 00:26:54.670 [2024-05-15 11:20:13.169649] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:54.928 [2024-05-15 11:20:13.409901] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:56.303 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:26:56.303 ************************************ 00:26:56.303 END TEST raid_state_function_test_sb 00:26:56.303 ************************************ 00:26:56.303 00:26:56.303 real 0m31.078s 00:26:56.303 user 0m58.525s 00:26:56.303 sys 0m3.137s 00:26:56.303 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:56.303 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.303 11:20:14 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:26:56.303 11:20:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:26:56.303 11:20:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:56.303 11:20:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:56.303 ************************************ 00:26:56.303 START TEST raid_superblock_test 00:26:56.303 ************************************ 00:26:56.303 11:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:26:56.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=58687 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 58687 /var/tmp/spdk-raid.sock 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 58687 ']' 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:56.304 11:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.304 [2024-05-15 11:20:14.839788] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
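[Editor's sketch, not captured output] The banner above comes from the superblock test launching its own bdev_svc instance on a private RPC socket before it creates any bdevs (bdev_raid.sh@411-413 plus waitforlisten). A simplified stand-in for that startup step is sketched below; paths match this run, and the polling loop only approximates what autotest_common.sh's waitforlisten does.

#!/usr/bin/env bash
# Sketch of how the superblock test brings up its private RPC socket.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock

# Start the minimal bdev application with bdev_raid debug logging enabled.
"$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
raid_pid=$!

# Wait until the app answers RPCs on the socket (waitforlisten polls similarly).
until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "bdev_svc (pid $raid_pid) is listening on $sock"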
00:26:56.304 [2024-05-15 11:20:14.840054] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58687 ] 00:26:56.562 [2024-05-15 11:20:15.003249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.821 [2024-05-15 11:20:15.243121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.821 [2024-05-15 11:20:15.444539] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:57.080 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:57.338 malloc1 00:26:57.338 11:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:57.596 [2024-05-15 11:20:16.090135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:57.596 [2024-05-15 11:20:16.090280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.596 [2024-05-15 11:20:16.090337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:26:57.596 [2024-05-15 11:20:16.090398] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.596 [2024-05-15 11:20:16.092426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.596 [2024-05-15 11:20:16.092465] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:57.596 pt1 00:26:57.596 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:57.596 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:57.596 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:57.596 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:57.596 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:57.596 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:26:57.596 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:57.596 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:57.596 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:57.854 malloc2 00:26:57.854 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:58.112 [2024-05-15 11:20:16.508551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:58.112 [2024-05-15 11:20:16.508659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.112 [2024-05-15 11:20:16.508712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:26:58.112 [2024-05-15 11:20:16.508751] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.113 [2024-05-15 11:20:16.510807] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.113 [2024-05-15 11:20:16.510857] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:58.113 pt2 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:58.113 malloc3 00:26:58.113 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:58.371 [2024-05-15 11:20:16.918918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:58.371 [2024-05-15 11:20:16.919029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.371 [2024-05-15 11:20:16.919093] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:26:58.371 [2024-05-15 11:20:16.919137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.371 pt3 00:26:58.371 [2024-05-15 11:20:16.921331] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.371 [2024-05-15 11:20:16.921379] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:58.371 11:20:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:58.371 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:58.371 11:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:26:58.630 [2024-05-15 11:20:17.147081] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:58.630 [2024-05-15 11:20:17.148731] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:58.630 [2024-05-15 11:20:17.148780] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:58.630 [2024-05-15 11:20:17.148926] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:26:58.630 [2024-05-15 11:20:17.148941] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:58.630 [2024-05-15 11:20:17.149048] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:26:58.630 [2024-05-15 11:20:17.149318] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:26:58.630 [2024-05-15 11:20:17.149332] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:26:58.630 [2024-05-15 11:20:17.149455] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.630 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.888 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:58.888 "name": "raid_bdev1", 00:26:58.888 "uuid": "6fdc4129-d428-4bbd-b952-db732b1bb62e", 00:26:58.888 "strip_size_kb": 64, 00:26:58.888 "state": "online", 00:26:58.888 "raid_level": "raid0", 00:26:58.888 "superblock": true, 00:26:58.888 "num_base_bdevs": 3, 00:26:58.888 "num_base_bdevs_discovered": 3, 00:26:58.888 "num_base_bdevs_operational": 3, 00:26:58.888 "base_bdevs_list": [ 00:26:58.888 { 00:26:58.888 "name": "pt1", 00:26:58.888 "uuid": "639ddb0e-82f2-5a58-b8cd-24ef2282d884", 00:26:58.888 
"is_configured": true, 00:26:58.888 "data_offset": 2048, 00:26:58.888 "data_size": 63488 00:26:58.888 }, 00:26:58.888 { 00:26:58.888 "name": "pt2", 00:26:58.888 "uuid": "423aaf1a-373b-5732-8881-7fb42fbcb5d3", 00:26:58.888 "is_configured": true, 00:26:58.888 "data_offset": 2048, 00:26:58.888 "data_size": 63488 00:26:58.888 }, 00:26:58.888 { 00:26:58.888 "name": "pt3", 00:26:58.888 "uuid": "4b7bf06e-8a77-5f89-83e2-6b875756afcd", 00:26:58.888 "is_configured": true, 00:26:58.888 "data_offset": 2048, 00:26:58.888 "data_size": 63488 00:26:58.889 } 00:26:58.889 ] 00:26:58.889 }' 00:26:58.889 11:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:58.889 11:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.455 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:59.455 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:26:59.455 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:59.455 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:59.455 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:59.455 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:26:59.455 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:59.455 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:59.713 [2024-05-15 11:20:18.251395] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:59.713 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:59.713 "name": "raid_bdev1", 00:26:59.713 "aliases": [ 00:26:59.713 "6fdc4129-d428-4bbd-b952-db732b1bb62e" 00:26:59.713 ], 00:26:59.713 "product_name": "Raid Volume", 00:26:59.713 "block_size": 512, 00:26:59.713 "num_blocks": 190464, 00:26:59.713 "uuid": "6fdc4129-d428-4bbd-b952-db732b1bb62e", 00:26:59.713 "assigned_rate_limits": { 00:26:59.713 "rw_ios_per_sec": 0, 00:26:59.713 "rw_mbytes_per_sec": 0, 00:26:59.713 "r_mbytes_per_sec": 0, 00:26:59.713 "w_mbytes_per_sec": 0 00:26:59.713 }, 00:26:59.713 "claimed": false, 00:26:59.713 "zoned": false, 00:26:59.713 "supported_io_types": { 00:26:59.713 "read": true, 00:26:59.713 "write": true, 00:26:59.713 "unmap": true, 00:26:59.713 "write_zeroes": true, 00:26:59.713 "flush": true, 00:26:59.713 "reset": true, 00:26:59.714 "compare": false, 00:26:59.714 "compare_and_write": false, 00:26:59.714 "abort": false, 00:26:59.714 "nvme_admin": false, 00:26:59.714 "nvme_io": false 00:26:59.714 }, 00:26:59.714 "memory_domains": [ 00:26:59.714 { 00:26:59.714 "dma_device_id": "system", 00:26:59.714 "dma_device_type": 1 00:26:59.714 }, 00:26:59.714 { 00:26:59.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.714 "dma_device_type": 2 00:26:59.714 }, 00:26:59.714 { 00:26:59.714 "dma_device_id": "system", 00:26:59.714 "dma_device_type": 1 00:26:59.714 }, 00:26:59.714 { 00:26:59.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.714 "dma_device_type": 2 00:26:59.714 }, 00:26:59.714 { 00:26:59.714 "dma_device_id": "system", 00:26:59.714 "dma_device_type": 1 00:26:59.714 }, 00:26:59.714 { 00:26:59.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.714 "dma_device_type": 
2 00:26:59.714 } 00:26:59.714 ], 00:26:59.714 "driver_specific": { 00:26:59.714 "raid": { 00:26:59.714 "uuid": "6fdc4129-d428-4bbd-b952-db732b1bb62e", 00:26:59.714 "strip_size_kb": 64, 00:26:59.714 "state": "online", 00:26:59.714 "raid_level": "raid0", 00:26:59.714 "superblock": true, 00:26:59.714 "num_base_bdevs": 3, 00:26:59.714 "num_base_bdevs_discovered": 3, 00:26:59.714 "num_base_bdevs_operational": 3, 00:26:59.714 "base_bdevs_list": [ 00:26:59.714 { 00:26:59.714 "name": "pt1", 00:26:59.714 "uuid": "639ddb0e-82f2-5a58-b8cd-24ef2282d884", 00:26:59.714 "is_configured": true, 00:26:59.714 "data_offset": 2048, 00:26:59.714 "data_size": 63488 00:26:59.714 }, 00:26:59.714 { 00:26:59.714 "name": "pt2", 00:26:59.714 "uuid": "423aaf1a-373b-5732-8881-7fb42fbcb5d3", 00:26:59.714 "is_configured": true, 00:26:59.714 "data_offset": 2048, 00:26:59.714 "data_size": 63488 00:26:59.714 }, 00:26:59.714 { 00:26:59.714 "name": "pt3", 00:26:59.714 "uuid": "4b7bf06e-8a77-5f89-83e2-6b875756afcd", 00:26:59.714 "is_configured": true, 00:26:59.714 "data_offset": 2048, 00:26:59.714 "data_size": 63488 00:26:59.714 } 00:26:59.714 ] 00:26:59.714 } 00:26:59.714 } 00:26:59.714 }' 00:26:59.714 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:59.714 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:26:59.714 pt2 00:26:59.714 pt3' 00:26:59.714 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:59.714 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:59.714 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:59.972 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:59.972 "name": "pt1", 00:26:59.972 "aliases": [ 00:26:59.972 "639ddb0e-82f2-5a58-b8cd-24ef2282d884" 00:26:59.972 ], 00:26:59.972 "product_name": "passthru", 00:26:59.972 "block_size": 512, 00:26:59.972 "num_blocks": 65536, 00:26:59.972 "uuid": "639ddb0e-82f2-5a58-b8cd-24ef2282d884", 00:26:59.972 "assigned_rate_limits": { 00:26:59.972 "rw_ios_per_sec": 0, 00:26:59.972 "rw_mbytes_per_sec": 0, 00:26:59.972 "r_mbytes_per_sec": 0, 00:26:59.972 "w_mbytes_per_sec": 0 00:26:59.972 }, 00:26:59.972 "claimed": true, 00:26:59.972 "claim_type": "exclusive_write", 00:26:59.972 "zoned": false, 00:26:59.972 "supported_io_types": { 00:26:59.972 "read": true, 00:26:59.972 "write": true, 00:26:59.972 "unmap": true, 00:26:59.972 "write_zeroes": true, 00:26:59.972 "flush": true, 00:26:59.972 "reset": true, 00:26:59.972 "compare": false, 00:26:59.972 "compare_and_write": false, 00:26:59.972 "abort": true, 00:26:59.972 "nvme_admin": false, 00:26:59.972 "nvme_io": false 00:26:59.972 }, 00:26:59.972 "memory_domains": [ 00:26:59.972 { 00:26:59.972 "dma_device_id": "system", 00:26:59.972 "dma_device_type": 1 00:26:59.972 }, 00:26:59.972 { 00:26:59.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.972 "dma_device_type": 2 00:26:59.972 } 00:26:59.972 ], 00:26:59.972 "driver_specific": { 00:26:59.972 "passthru": { 00:26:59.972 "name": "pt1", 00:26:59.972 "base_bdev_name": "malloc1" 00:26:59.972 } 00:26:59.972 } 00:26:59.972 }' 00:26:59.972 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:59.972 11:20:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:00.231 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:00.231 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:00.231 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:00.231 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:00.231 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:00.231 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:00.231 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:00.231 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:00.490 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:00.490 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:00.490 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:00.490 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:00.490 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:00.748 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:00.748 "name": "pt2", 00:27:00.748 "aliases": [ 00:27:00.748 "423aaf1a-373b-5732-8881-7fb42fbcb5d3" 00:27:00.748 ], 00:27:00.748 "product_name": "passthru", 00:27:00.748 "block_size": 512, 00:27:00.748 "num_blocks": 65536, 00:27:00.748 "uuid": "423aaf1a-373b-5732-8881-7fb42fbcb5d3", 00:27:00.748 "assigned_rate_limits": { 00:27:00.748 "rw_ios_per_sec": 0, 00:27:00.748 "rw_mbytes_per_sec": 0, 00:27:00.748 "r_mbytes_per_sec": 0, 00:27:00.748 "w_mbytes_per_sec": 0 00:27:00.748 }, 00:27:00.748 "claimed": true, 00:27:00.748 "claim_type": "exclusive_write", 00:27:00.748 "zoned": false, 00:27:00.748 "supported_io_types": { 00:27:00.748 "read": true, 00:27:00.748 "write": true, 00:27:00.748 "unmap": true, 00:27:00.748 "write_zeroes": true, 00:27:00.748 "flush": true, 00:27:00.748 "reset": true, 00:27:00.748 "compare": false, 00:27:00.748 "compare_and_write": false, 00:27:00.748 "abort": true, 00:27:00.748 "nvme_admin": false, 00:27:00.748 "nvme_io": false 00:27:00.748 }, 00:27:00.748 "memory_domains": [ 00:27:00.748 { 00:27:00.748 "dma_device_id": "system", 00:27:00.748 "dma_device_type": 1 00:27:00.748 }, 00:27:00.748 { 00:27:00.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.748 "dma_device_type": 2 00:27:00.748 } 00:27:00.748 ], 00:27:00.748 "driver_specific": { 00:27:00.748 "passthru": { 00:27:00.748 "name": "pt2", 00:27:00.748 "base_bdev_name": "malloc2" 00:27:00.748 } 00:27:00.748 } 00:27:00.748 }' 00:27:00.748 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:00.748 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:00.748 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:00.748 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:00.748 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:01.007 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:01.007 11:20:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:01.007 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:01.007 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:01.007 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:01.007 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:01.266 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:01.266 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:01.266 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:01.266 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:01.266 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:01.266 "name": "pt3", 00:27:01.266 "aliases": [ 00:27:01.266 "4b7bf06e-8a77-5f89-83e2-6b875756afcd" 00:27:01.266 ], 00:27:01.266 "product_name": "passthru", 00:27:01.266 "block_size": 512, 00:27:01.266 "num_blocks": 65536, 00:27:01.266 "uuid": "4b7bf06e-8a77-5f89-83e2-6b875756afcd", 00:27:01.266 "assigned_rate_limits": { 00:27:01.266 "rw_ios_per_sec": 0, 00:27:01.266 "rw_mbytes_per_sec": 0, 00:27:01.266 "r_mbytes_per_sec": 0, 00:27:01.266 "w_mbytes_per_sec": 0 00:27:01.266 }, 00:27:01.266 "claimed": true, 00:27:01.266 "claim_type": "exclusive_write", 00:27:01.266 "zoned": false, 00:27:01.266 "supported_io_types": { 00:27:01.266 "read": true, 00:27:01.266 "write": true, 00:27:01.266 "unmap": true, 00:27:01.266 "write_zeroes": true, 00:27:01.266 "flush": true, 00:27:01.266 "reset": true, 00:27:01.266 "compare": false, 00:27:01.266 "compare_and_write": false, 00:27:01.266 "abort": true, 00:27:01.266 "nvme_admin": false, 00:27:01.266 "nvme_io": false 00:27:01.266 }, 00:27:01.266 "memory_domains": [ 00:27:01.266 { 00:27:01.266 "dma_device_id": "system", 00:27:01.266 "dma_device_type": 1 00:27:01.266 }, 00:27:01.266 { 00:27:01.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:01.266 "dma_device_type": 2 00:27:01.266 } 00:27:01.266 ], 00:27:01.266 "driver_specific": { 00:27:01.266 "passthru": { 00:27:01.266 "name": "pt3", 00:27:01.266 "base_bdev_name": "malloc3" 00:27:01.266 } 00:27:01.266 } 00:27:01.266 }' 00:27:01.266 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:01.524 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:01.524 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:01.524 11:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:01.524 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:01.524 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:01.524 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:01.524 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:01.782 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:01.782 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:01.782 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:27:01.782 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:01.782 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:01.782 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:02.041 [2024-05-15 11:20:20.511793] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:02.041 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6fdc4129-d428-4bbd-b952-db732b1bb62e 00:27:02.041 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6fdc4129-d428-4bbd-b952-db732b1bb62e ']' 00:27:02.041 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:02.300 [2024-05-15 11:20:20.751678] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.300 [2024-05-15 11:20:20.751731] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.300 [2024-05-15 11:20:20.752021] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.300 [2024-05-15 11:20:20.752088] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.300 [2024-05-15 11:20:20.752102] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:27:02.300 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.300 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:02.558 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:02.558 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:02.558 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:02.558 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:02.817 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:02.817 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:03.074 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:03.074 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:03.333 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:03.592 [2024-05-15 11:20:22.152018] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:03.592 [2024-05-15 11:20:22.153761] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:03.592 [2024-05-15 11:20:22.153811] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:03.592 [2024-05-15 11:20:22.154064] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:03.592 [2024-05-15 11:20:22.154182] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:03.592 [2024-05-15 11:20:22.154221] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:03.592 [2024-05-15 11:20:22.154273] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:03.592 [2024-05-15 11:20:22.154286] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:27:03.592 request: 00:27:03.592 { 00:27:03.592 "name": "raid_bdev1", 00:27:03.592 "raid_level": "raid0", 00:27:03.592 "base_bdevs": [ 00:27:03.592 "malloc1", 00:27:03.592 "malloc2", 00:27:03.592 "malloc3" 00:27:03.592 ], 00:27:03.592 "strip_size_kb": 64, 00:27:03.592 "superblock": false, 00:27:03.592 "method": "bdev_raid_create", 00:27:03.592 "req_id": 1 00:27:03.592 } 00:27:03.592 Got JSON-RPC error response 00:27:03.592 response: 00:27:03.592 { 00:27:03.592 "code": -17, 00:27:03.592 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:03.592 } 00:27:03.592 11:20:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:27:03.592 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:03.592 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:03.592 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:03.592 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.592 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:03.850 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:03.850 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:03.850 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:04.109 [2024-05-15 11:20:22.564143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:04.109 [2024-05-15 11:20:22.564260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.109 [2024-05-15 11:20:22.564327] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d680 00:27:04.109 [2024-05-15 11:20:22.564367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.109 [2024-05-15 11:20:22.566337] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.109 [2024-05-15 11:20:22.566390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:04.109 [2024-05-15 11:20:22.566517] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:27:04.109 [2024-05-15 11:20:22.566582] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:04.109 pt1 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.109 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.367 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:27:04.368 "name": "raid_bdev1", 00:27:04.368 "uuid": "6fdc4129-d428-4bbd-b952-db732b1bb62e", 00:27:04.368 "strip_size_kb": 64, 00:27:04.368 "state": "configuring", 00:27:04.368 "raid_level": "raid0", 00:27:04.368 "superblock": true, 00:27:04.368 "num_base_bdevs": 3, 00:27:04.368 "num_base_bdevs_discovered": 1, 00:27:04.368 "num_base_bdevs_operational": 3, 00:27:04.368 "base_bdevs_list": [ 00:27:04.368 { 00:27:04.368 "name": "pt1", 00:27:04.368 "uuid": "639ddb0e-82f2-5a58-b8cd-24ef2282d884", 00:27:04.368 "is_configured": true, 00:27:04.368 "data_offset": 2048, 00:27:04.368 "data_size": 63488 00:27:04.368 }, 00:27:04.368 { 00:27:04.368 "name": null, 00:27:04.368 "uuid": "423aaf1a-373b-5732-8881-7fb42fbcb5d3", 00:27:04.368 "is_configured": false, 00:27:04.368 "data_offset": 2048, 00:27:04.368 "data_size": 63488 00:27:04.368 }, 00:27:04.368 { 00:27:04.368 "name": null, 00:27:04.368 "uuid": "4b7bf06e-8a77-5f89-83e2-6b875756afcd", 00:27:04.368 "is_configured": false, 00:27:04.368 "data_offset": 2048, 00:27:04.368 "data_size": 63488 00:27:04.368 } 00:27:04.368 ] 00:27:04.368 }' 00:27:04.368 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:04.368 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.943 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:27:04.943 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:05.201 [2024-05-15 11:20:23.704434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:05.201 [2024-05-15 11:20:23.704546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.201 [2024-05-15 11:20:23.704605] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ee80 00:27:05.201 [2024-05-15 11:20:23.704630] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.201 [2024-05-15 11:20:23.705251] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.202 [2024-05-15 11:20:23.705294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:05.202 [2024-05-15 11:20:23.705414] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:05.202 [2024-05-15 11:20:23.705450] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:05.202 pt2 00:27:05.202 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:05.464 [2024-05-15 11:20:23.936470] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.464 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.722 11:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:05.722 "name": "raid_bdev1", 00:27:05.722 "uuid": "6fdc4129-d428-4bbd-b952-db732b1bb62e", 00:27:05.722 "strip_size_kb": 64, 00:27:05.722 "state": "configuring", 00:27:05.722 "raid_level": "raid0", 00:27:05.722 "superblock": true, 00:27:05.722 "num_base_bdevs": 3, 00:27:05.722 "num_base_bdevs_discovered": 1, 00:27:05.722 "num_base_bdevs_operational": 3, 00:27:05.722 "base_bdevs_list": [ 00:27:05.722 { 00:27:05.722 "name": "pt1", 00:27:05.722 "uuid": "639ddb0e-82f2-5a58-b8cd-24ef2282d884", 00:27:05.722 "is_configured": true, 00:27:05.722 "data_offset": 2048, 00:27:05.722 "data_size": 63488 00:27:05.722 }, 00:27:05.722 { 00:27:05.722 "name": null, 00:27:05.722 "uuid": "423aaf1a-373b-5732-8881-7fb42fbcb5d3", 00:27:05.722 "is_configured": false, 00:27:05.722 "data_offset": 2048, 00:27:05.722 "data_size": 63488 00:27:05.722 }, 00:27:05.722 { 00:27:05.722 "name": null, 00:27:05.722 "uuid": "4b7bf06e-8a77-5f89-83e2-6b875756afcd", 00:27:05.722 "is_configured": false, 00:27:05.722 "data_offset": 2048, 00:27:05.722 "data_size": 63488 00:27:05.722 } 00:27:05.722 ] 00:27:05.722 }' 00:27:05.722 11:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:05.722 11:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.288 11:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:06.288 11:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:06.288 11:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:06.546 [2024-05-15 11:20:25.132641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:06.546 [2024-05-15 11:20:25.132776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.546 [2024-05-15 11:20:25.132860] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030680 00:27:06.546 [2024-05-15 11:20:25.132905] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.546 [2024-05-15 11:20:25.133406] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.546 [2024-05-15 11:20:25.133461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:06.546 [2024-05-15 11:20:25.133625] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:06.546 [2024-05-15 11:20:25.133667] bdev_raid.c:3122:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:27:06.546 pt2 00:27:06.546 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:06.546 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:06.546 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:06.803 [2024-05-15 11:20:25.368654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:06.803 [2024-05-15 11:20:25.368756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.803 [2024-05-15 11:20:25.368805] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031b80 00:27:06.803 [2024-05-15 11:20:25.369071] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.803 [2024-05-15 11:20:25.369443] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.803 [2024-05-15 11:20:25.369483] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:06.803 [2024-05-15 11:20:25.369586] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:27:06.803 [2024-05-15 11:20:25.369625] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:06.803 [2024-05-15 11:20:25.369726] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:27:06.803 [2024-05-15 11:20:25.369739] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:06.803 [2024-05-15 11:20:25.369863] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:06.803 [2024-05-15 11:20:25.370071] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:27:06.803 [2024-05-15 11:20:25.370089] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:27:06.803 [2024-05-15 11:20:25.370187] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:06.803 pt3 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 
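The check just invoked (verify_raid_bdev_state raid_bdev1 online raid0 64 3) reduces to one JSON-RPC dump plus jq field assertions, which the next trace lines carry out. A minimal by-hand sketch of the same probe, assuming the RPC socket used throughout this run (/var/tmp/spdk-raid.sock) and the assembled raid_bdev1; this mirrors the test's calls rather than adding anything new:
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Dump the raid bdev record, the same call bdev_raid.sh@127 traces below.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
# Individual fields can then be asserted the way the test does, e.g. the state:
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expected: online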
00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.803 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.061 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:07.061 "name": "raid_bdev1", 00:27:07.061 "uuid": "6fdc4129-d428-4bbd-b952-db732b1bb62e", 00:27:07.061 "strip_size_kb": 64, 00:27:07.061 "state": "online", 00:27:07.061 "raid_level": "raid0", 00:27:07.061 "superblock": true, 00:27:07.061 "num_base_bdevs": 3, 00:27:07.061 "num_base_bdevs_discovered": 3, 00:27:07.061 "num_base_bdevs_operational": 3, 00:27:07.061 "base_bdevs_list": [ 00:27:07.061 { 00:27:07.061 "name": "pt1", 00:27:07.061 "uuid": "639ddb0e-82f2-5a58-b8cd-24ef2282d884", 00:27:07.061 "is_configured": true, 00:27:07.061 "data_offset": 2048, 00:27:07.061 "data_size": 63488 00:27:07.061 }, 00:27:07.061 { 00:27:07.061 "name": "pt2", 00:27:07.061 "uuid": "423aaf1a-373b-5732-8881-7fb42fbcb5d3", 00:27:07.061 "is_configured": true, 00:27:07.061 "data_offset": 2048, 00:27:07.061 "data_size": 63488 00:27:07.061 }, 00:27:07.061 { 00:27:07.061 "name": "pt3", 00:27:07.061 "uuid": "4b7bf06e-8a77-5f89-83e2-6b875756afcd", 00:27:07.061 "is_configured": true, 00:27:07.061 "data_offset": 2048, 00:27:07.061 "data_size": 63488 00:27:07.061 } 00:27:07.061 ] 00:27:07.061 }' 00:27:07.061 11:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:07.061 11:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:07.992 [2024-05-15 11:20:26.569183] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:07.992 "name": "raid_bdev1", 00:27:07.992 "aliases": [ 00:27:07.992 "6fdc4129-d428-4bbd-b952-db732b1bb62e" 00:27:07.992 ], 00:27:07.992 "product_name": "Raid Volume", 00:27:07.992 "block_size": 512, 00:27:07.992 "num_blocks": 190464, 00:27:07.992 "uuid": "6fdc4129-d428-4bbd-b952-db732b1bb62e", 00:27:07.992 "assigned_rate_limits": { 00:27:07.992 "rw_ios_per_sec": 0, 00:27:07.992 "rw_mbytes_per_sec": 0, 00:27:07.992 "r_mbytes_per_sec": 0, 00:27:07.992 "w_mbytes_per_sec": 0 00:27:07.992 }, 00:27:07.992 "claimed": false, 00:27:07.992 "zoned": false, 00:27:07.992 "supported_io_types": { 00:27:07.992 "read": true, 00:27:07.992 "write": true, 00:27:07.992 "unmap": true, 00:27:07.992 "write_zeroes": true, 00:27:07.992 
"flush": true, 00:27:07.992 "reset": true, 00:27:07.992 "compare": false, 00:27:07.992 "compare_and_write": false, 00:27:07.992 "abort": false, 00:27:07.992 "nvme_admin": false, 00:27:07.992 "nvme_io": false 00:27:07.992 }, 00:27:07.992 "memory_domains": [ 00:27:07.992 { 00:27:07.992 "dma_device_id": "system", 00:27:07.992 "dma_device_type": 1 00:27:07.992 }, 00:27:07.992 { 00:27:07.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.992 "dma_device_type": 2 00:27:07.992 }, 00:27:07.992 { 00:27:07.992 "dma_device_id": "system", 00:27:07.992 "dma_device_type": 1 00:27:07.992 }, 00:27:07.992 { 00:27:07.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.992 "dma_device_type": 2 00:27:07.992 }, 00:27:07.992 { 00:27:07.992 "dma_device_id": "system", 00:27:07.992 "dma_device_type": 1 00:27:07.992 }, 00:27:07.992 { 00:27:07.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.992 "dma_device_type": 2 00:27:07.992 } 00:27:07.992 ], 00:27:07.992 "driver_specific": { 00:27:07.992 "raid": { 00:27:07.992 "uuid": "6fdc4129-d428-4bbd-b952-db732b1bb62e", 00:27:07.992 "strip_size_kb": 64, 00:27:07.992 "state": "online", 00:27:07.992 "raid_level": "raid0", 00:27:07.992 "superblock": true, 00:27:07.992 "num_base_bdevs": 3, 00:27:07.992 "num_base_bdevs_discovered": 3, 00:27:07.992 "num_base_bdevs_operational": 3, 00:27:07.992 "base_bdevs_list": [ 00:27:07.992 { 00:27:07.992 "name": "pt1", 00:27:07.992 "uuid": "639ddb0e-82f2-5a58-b8cd-24ef2282d884", 00:27:07.992 "is_configured": true, 00:27:07.992 "data_offset": 2048, 00:27:07.992 "data_size": 63488 00:27:07.992 }, 00:27:07.992 { 00:27:07.992 "name": "pt2", 00:27:07.992 "uuid": "423aaf1a-373b-5732-8881-7fb42fbcb5d3", 00:27:07.992 "is_configured": true, 00:27:07.992 "data_offset": 2048, 00:27:07.992 "data_size": 63488 00:27:07.992 }, 00:27:07.992 { 00:27:07.992 "name": "pt3", 00:27:07.992 "uuid": "4b7bf06e-8a77-5f89-83e2-6b875756afcd", 00:27:07.992 "is_configured": true, 00:27:07.992 "data_offset": 2048, 00:27:07.992 "data_size": 63488 00:27:07.992 } 00:27:07.992 ] 00:27:07.992 } 00:27:07.992 } 00:27:07.992 }' 00:27:07.992 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:08.249 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:27:08.249 pt2 00:27:08.249 pt3' 00:27:08.249 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:08.249 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:08.249 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:08.506 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:08.506 "name": "pt1", 00:27:08.506 "aliases": [ 00:27:08.506 "639ddb0e-82f2-5a58-b8cd-24ef2282d884" 00:27:08.506 ], 00:27:08.506 "product_name": "passthru", 00:27:08.506 "block_size": 512, 00:27:08.506 "num_blocks": 65536, 00:27:08.506 "uuid": "639ddb0e-82f2-5a58-b8cd-24ef2282d884", 00:27:08.506 "assigned_rate_limits": { 00:27:08.506 "rw_ios_per_sec": 0, 00:27:08.506 "rw_mbytes_per_sec": 0, 00:27:08.506 "r_mbytes_per_sec": 0, 00:27:08.506 "w_mbytes_per_sec": 0 00:27:08.506 }, 00:27:08.506 "claimed": true, 00:27:08.506 "claim_type": "exclusive_write", 00:27:08.506 "zoned": false, 00:27:08.506 "supported_io_types": { 00:27:08.506 "read": true, 00:27:08.506 "write": 
true, 00:27:08.506 "unmap": true, 00:27:08.506 "write_zeroes": true, 00:27:08.506 "flush": true, 00:27:08.506 "reset": true, 00:27:08.506 "compare": false, 00:27:08.506 "compare_and_write": false, 00:27:08.506 "abort": true, 00:27:08.506 "nvme_admin": false, 00:27:08.506 "nvme_io": false 00:27:08.506 }, 00:27:08.506 "memory_domains": [ 00:27:08.506 { 00:27:08.506 "dma_device_id": "system", 00:27:08.506 "dma_device_type": 1 00:27:08.506 }, 00:27:08.506 { 00:27:08.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:08.506 "dma_device_type": 2 00:27:08.506 } 00:27:08.506 ], 00:27:08.506 "driver_specific": { 00:27:08.506 "passthru": { 00:27:08.506 "name": "pt1", 00:27:08.506 "base_bdev_name": "malloc1" 00:27:08.506 } 00:27:08.506 } 00:27:08.506 }' 00:27:08.506 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:08.506 11:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:08.506 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:08.506 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:08.506 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:08.506 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:08.506 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:08.775 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:08.775 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:08.775 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:08.775 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:08.775 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:08.775 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:08.775 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:08.775 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:09.051 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:09.051 "name": "pt2", 00:27:09.051 "aliases": [ 00:27:09.051 "423aaf1a-373b-5732-8881-7fb42fbcb5d3" 00:27:09.051 ], 00:27:09.051 "product_name": "passthru", 00:27:09.051 "block_size": 512, 00:27:09.051 "num_blocks": 65536, 00:27:09.051 "uuid": "423aaf1a-373b-5732-8881-7fb42fbcb5d3", 00:27:09.051 "assigned_rate_limits": { 00:27:09.051 "rw_ios_per_sec": 0, 00:27:09.051 "rw_mbytes_per_sec": 0, 00:27:09.051 "r_mbytes_per_sec": 0, 00:27:09.051 "w_mbytes_per_sec": 0 00:27:09.051 }, 00:27:09.051 "claimed": true, 00:27:09.051 "claim_type": "exclusive_write", 00:27:09.051 "zoned": false, 00:27:09.051 "supported_io_types": { 00:27:09.051 "read": true, 00:27:09.051 "write": true, 00:27:09.051 "unmap": true, 00:27:09.051 "write_zeroes": true, 00:27:09.051 "flush": true, 00:27:09.051 "reset": true, 00:27:09.051 "compare": false, 00:27:09.051 "compare_and_write": false, 00:27:09.051 "abort": true, 00:27:09.051 "nvme_admin": false, 00:27:09.051 "nvme_io": false 00:27:09.051 }, 00:27:09.051 "memory_domains": [ 00:27:09.051 { 00:27:09.051 "dma_device_id": "system", 00:27:09.051 "dma_device_type": 1 00:27:09.051 }, 00:27:09.051 
{ 00:27:09.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.051 "dma_device_type": 2 00:27:09.051 } 00:27:09.051 ], 00:27:09.051 "driver_specific": { 00:27:09.051 "passthru": { 00:27:09.051 "name": "pt2", 00:27:09.051 "base_bdev_name": "malloc2" 00:27:09.051 } 00:27:09.051 } 00:27:09.051 }' 00:27:09.051 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:09.051 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:09.051 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:09.051 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:09.051 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:09.309 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:09.309 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:09.309 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:09.309 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:09.309 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:09.309 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:09.567 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:09.567 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:09.567 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:09.567 11:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:09.567 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:09.567 "name": "pt3", 00:27:09.567 "aliases": [ 00:27:09.567 "4b7bf06e-8a77-5f89-83e2-6b875756afcd" 00:27:09.568 ], 00:27:09.568 "product_name": "passthru", 00:27:09.568 "block_size": 512, 00:27:09.568 "num_blocks": 65536, 00:27:09.568 "uuid": "4b7bf06e-8a77-5f89-83e2-6b875756afcd", 00:27:09.568 "assigned_rate_limits": { 00:27:09.568 "rw_ios_per_sec": 0, 00:27:09.568 "rw_mbytes_per_sec": 0, 00:27:09.568 "r_mbytes_per_sec": 0, 00:27:09.568 "w_mbytes_per_sec": 0 00:27:09.568 }, 00:27:09.568 "claimed": true, 00:27:09.568 "claim_type": "exclusive_write", 00:27:09.568 "zoned": false, 00:27:09.568 "supported_io_types": { 00:27:09.568 "read": true, 00:27:09.568 "write": true, 00:27:09.568 "unmap": true, 00:27:09.568 "write_zeroes": true, 00:27:09.568 "flush": true, 00:27:09.568 "reset": true, 00:27:09.568 "compare": false, 00:27:09.568 "compare_and_write": false, 00:27:09.568 "abort": true, 00:27:09.568 "nvme_admin": false, 00:27:09.568 "nvme_io": false 00:27:09.568 }, 00:27:09.568 "memory_domains": [ 00:27:09.568 { 00:27:09.568 "dma_device_id": "system", 00:27:09.568 "dma_device_type": 1 00:27:09.568 }, 00:27:09.568 { 00:27:09.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.568 "dma_device_type": 2 00:27:09.568 } 00:27:09.568 ], 00:27:09.568 "driver_specific": { 00:27:09.568 "passthru": { 00:27:09.568 "name": "pt3", 00:27:09.568 "base_bdev_name": "malloc3" 00:27:09.568 } 00:27:09.568 } 00:27:09.568 }' 00:27:09.568 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:09.826 11:20:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:09.826 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:09.826 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:09.826 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:09.826 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:09.826 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:09.826 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:10.084 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:10.084 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:10.084 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:10.084 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:10.084 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:10.084 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:10.342 [2024-05-15 11:20:28.905600] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6fdc4129-d428-4bbd-b952-db732b1bb62e '!=' 6fdc4129-d428-4bbd-b952-db732b1bb62e ']' 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 58687 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 58687 ']' 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 58687 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58687 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:10.342 killing process with pid 58687 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58687' 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 58687 00:27:10.342 [2024-05-15 11:20:28.947149] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:10.342 11:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 58687 00:27:10.342 [2024-05-15 11:20:28.947210] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.342 [2024-05-15 11:20:28.947252] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:10.342 
[2024-05-15 11:20:28.947263] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:27:10.600 [2024-05-15 11:20:29.187676] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:12.033 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:27:12.033 00:27:12.033 real 0m15.823s 00:27:12.033 user 0m28.679s 00:27:12.033 sys 0m1.561s 00:27:12.033 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:12.033 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.033 ************************************ 00:27:12.033 END TEST raid_superblock_test 00:27:12.033 ************************************ 00:27:12.033 11:20:30 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:27:12.033 11:20:30 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:27:12.033 11:20:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:27:12.033 11:20:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:12.033 11:20:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:12.033 ************************************ 00:27:12.033 START TEST raid_state_function_test 00:27:12.033 ************************************ 00:27:12.033 11:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:27:12.033 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:27:12.034 11:20:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=59179 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 59179' 00:27:12.034 Process raid pid: 59179 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 59179 /var/tmp/spdk-raid.sock 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 59179 ']' 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:12.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:12.034 11:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.292 [2024-05-15 11:20:30.701429] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
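The bring-up just traced (bdev_svc plus waitforlisten) and the create that follows in the next lines can be reproduced outside the harness. A rough sketch, assuming the CI paths above; waitforlisten is a helper from autotest_common.sh, so outside that harness any wait-for-RPC-socket loop can stand in for it:
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
# With BaseBdev1..3 not created yet, the raid bdev is registered but stays in the "configuring" state:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid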
00:27:12.292 [2024-05-15 11:20:30.701627] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.292 [2024-05-15 11:20:30.868826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.549 [2024-05-15 11:20:31.113957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.807 [2024-05-15 11:20:31.323724] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:13.066 [2024-05-15 11:20:31.674705] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:13.066 [2024-05-15 11:20:31.674803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:13.066 [2024-05-15 11:20:31.675019] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:13.066 [2024-05-15 11:20:31.675050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:13.066 [2024-05-15 11:20:31.675061] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:13.066 [2024-05-15 11:20:31.675107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.066 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:13.324 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:13.324 "name": "Existed_Raid", 00:27:13.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.324 "strip_size_kb": 64, 
00:27:13.324 "state": "configuring", 00:27:13.324 "raid_level": "concat", 00:27:13.324 "superblock": false, 00:27:13.324 "num_base_bdevs": 3, 00:27:13.324 "num_base_bdevs_discovered": 0, 00:27:13.324 "num_base_bdevs_operational": 3, 00:27:13.324 "base_bdevs_list": [ 00:27:13.324 { 00:27:13.324 "name": "BaseBdev1", 00:27:13.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.324 "is_configured": false, 00:27:13.324 "data_offset": 0, 00:27:13.324 "data_size": 0 00:27:13.324 }, 00:27:13.324 { 00:27:13.324 "name": "BaseBdev2", 00:27:13.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.324 "is_configured": false, 00:27:13.324 "data_offset": 0, 00:27:13.324 "data_size": 0 00:27:13.324 }, 00:27:13.324 { 00:27:13.324 "name": "BaseBdev3", 00:27:13.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.324 "is_configured": false, 00:27:13.324 "data_offset": 0, 00:27:13.324 "data_size": 0 00:27:13.324 } 00:27:13.324 ] 00:27:13.324 }' 00:27:13.324 11:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:13.324 11:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.889 11:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:14.147 [2024-05-15 11:20:32.678906] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:14.147 [2024-05-15 11:20:32.679001] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:27:14.147 11:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:14.405 [2024-05-15 11:20:32.922967] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:14.405 [2024-05-15 11:20:32.923069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:14.405 [2024-05-15 11:20:32.923085] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:14.405 [2024-05-15 11:20:32.923115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:14.405 [2024-05-15 11:20:32.923126] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:14.405 [2024-05-15 11:20:32.923151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:14.405 11:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:14.670 BaseBdev1 00:27:14.670 [2024-05-15 11:20:33.214783] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:14.670 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:27:14.670 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:27:14.670 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:14.670 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:27:14.670 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' 
]] 00:27:14.670 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:14.670 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:14.938 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:15.195 [ 00:27:15.195 { 00:27:15.195 "name": "BaseBdev1", 00:27:15.195 "aliases": [ 00:27:15.195 "4d9dfbf0-084a-4668-be1b-412ca601999a" 00:27:15.195 ], 00:27:15.195 "product_name": "Malloc disk", 00:27:15.195 "block_size": 512, 00:27:15.195 "num_blocks": 65536, 00:27:15.195 "uuid": "4d9dfbf0-084a-4668-be1b-412ca601999a", 00:27:15.195 "assigned_rate_limits": { 00:27:15.195 "rw_ios_per_sec": 0, 00:27:15.195 "rw_mbytes_per_sec": 0, 00:27:15.195 "r_mbytes_per_sec": 0, 00:27:15.195 "w_mbytes_per_sec": 0 00:27:15.195 }, 00:27:15.195 "claimed": true, 00:27:15.195 "claim_type": "exclusive_write", 00:27:15.195 "zoned": false, 00:27:15.195 "supported_io_types": { 00:27:15.195 "read": true, 00:27:15.195 "write": true, 00:27:15.195 "unmap": true, 00:27:15.195 "write_zeroes": true, 00:27:15.195 "flush": true, 00:27:15.195 "reset": true, 00:27:15.195 "compare": false, 00:27:15.195 "compare_and_write": false, 00:27:15.195 "abort": true, 00:27:15.195 "nvme_admin": false, 00:27:15.195 "nvme_io": false 00:27:15.195 }, 00:27:15.195 "memory_domains": [ 00:27:15.195 { 00:27:15.195 "dma_device_id": "system", 00:27:15.195 "dma_device_type": 1 00:27:15.195 }, 00:27:15.195 { 00:27:15.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.195 "dma_device_type": 2 00:27:15.195 } 00:27:15.195 ], 00:27:15.195 "driver_specific": {} 00:27:15.195 } 00:27:15.195 ] 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.195 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.453 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:27:15.453 "name": "Existed_Raid", 00:27:15.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.453 "strip_size_kb": 64, 00:27:15.453 "state": "configuring", 00:27:15.453 "raid_level": "concat", 00:27:15.453 "superblock": false, 00:27:15.453 "num_base_bdevs": 3, 00:27:15.453 "num_base_bdevs_discovered": 1, 00:27:15.453 "num_base_bdevs_operational": 3, 00:27:15.453 "base_bdevs_list": [ 00:27:15.453 { 00:27:15.453 "name": "BaseBdev1", 00:27:15.453 "uuid": "4d9dfbf0-084a-4668-be1b-412ca601999a", 00:27:15.453 "is_configured": true, 00:27:15.453 "data_offset": 0, 00:27:15.453 "data_size": 65536 00:27:15.453 }, 00:27:15.453 { 00:27:15.453 "name": "BaseBdev2", 00:27:15.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.453 "is_configured": false, 00:27:15.453 "data_offset": 0, 00:27:15.453 "data_size": 0 00:27:15.453 }, 00:27:15.453 { 00:27:15.453 "name": "BaseBdev3", 00:27:15.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.453 "is_configured": false, 00:27:15.453 "data_offset": 0, 00:27:15.453 "data_size": 0 00:27:15.453 } 00:27:15.453 ] 00:27:15.453 }' 00:27:15.453 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:15.453 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.079 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:16.336 [2024-05-15 11:20:34.859132] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:16.336 [2024-05-15 11:20:34.859194] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:27:16.336 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:16.593 [2024-05-15 11:20:35.099267] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:16.593 [2024-05-15 11:20:35.101032] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:16.593 [2024-05-15 11:20:35.101090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:16.593 [2024-05-15 11:20:35.101120] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:16.593 [2024-05-15 11:20:35.101162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:16.593 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:27:16.593 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:16.593 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:16.593 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:16.593 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:16.594 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:16.594 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:16.594 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 
-- # local num_base_bdevs_operational=3 00:27:16.594 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:16.594 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:16.594 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:16.594 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:16.594 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.594 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:16.852 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:16.852 "name": "Existed_Raid", 00:27:16.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.852 "strip_size_kb": 64, 00:27:16.852 "state": "configuring", 00:27:16.852 "raid_level": "concat", 00:27:16.852 "superblock": false, 00:27:16.852 "num_base_bdevs": 3, 00:27:16.852 "num_base_bdevs_discovered": 1, 00:27:16.852 "num_base_bdevs_operational": 3, 00:27:16.852 "base_bdevs_list": [ 00:27:16.852 { 00:27:16.852 "name": "BaseBdev1", 00:27:16.852 "uuid": "4d9dfbf0-084a-4668-be1b-412ca601999a", 00:27:16.852 "is_configured": true, 00:27:16.852 "data_offset": 0, 00:27:16.852 "data_size": 65536 00:27:16.852 }, 00:27:16.852 { 00:27:16.852 "name": "BaseBdev2", 00:27:16.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.852 "is_configured": false, 00:27:16.852 "data_offset": 0, 00:27:16.852 "data_size": 0 00:27:16.852 }, 00:27:16.852 { 00:27:16.852 "name": "BaseBdev3", 00:27:16.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.852 "is_configured": false, 00:27:16.852 "data_offset": 0, 00:27:16.852 "data_size": 0 00:27:16.852 } 00:27:16.852 ] 00:27:16.852 }' 00:27:16.852 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:16.852 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.425 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:17.694 [2024-05-15 11:20:36.310819] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.694 BaseBdev2 00:27:17.694 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:27:17.694 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:27:17.694 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:17.694 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:27:17.694 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:17.694 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:17.694 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:17.952 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:18.210 [ 00:27:18.210 { 00:27:18.210 "name": "BaseBdev2", 00:27:18.210 "aliases": [ 00:27:18.210 "469e6699-c6c7-4477-aa2c-941468b771ff" 00:27:18.210 ], 00:27:18.210 "product_name": "Malloc disk", 00:27:18.210 "block_size": 512, 00:27:18.210 "num_blocks": 65536, 00:27:18.210 "uuid": "469e6699-c6c7-4477-aa2c-941468b771ff", 00:27:18.210 "assigned_rate_limits": { 00:27:18.210 "rw_ios_per_sec": 0, 00:27:18.210 "rw_mbytes_per_sec": 0, 00:27:18.210 "r_mbytes_per_sec": 0, 00:27:18.210 "w_mbytes_per_sec": 0 00:27:18.210 }, 00:27:18.210 "claimed": true, 00:27:18.210 "claim_type": "exclusive_write", 00:27:18.210 "zoned": false, 00:27:18.210 "supported_io_types": { 00:27:18.210 "read": true, 00:27:18.210 "write": true, 00:27:18.210 "unmap": true, 00:27:18.210 "write_zeroes": true, 00:27:18.210 "flush": true, 00:27:18.210 "reset": true, 00:27:18.210 "compare": false, 00:27:18.210 "compare_and_write": false, 00:27:18.210 "abort": true, 00:27:18.210 "nvme_admin": false, 00:27:18.210 "nvme_io": false 00:27:18.210 }, 00:27:18.210 "memory_domains": [ 00:27:18.210 { 00:27:18.210 "dma_device_id": "system", 00:27:18.210 "dma_device_type": 1 00:27:18.210 }, 00:27:18.210 { 00:27:18.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.210 "dma_device_type": 2 00:27:18.210 } 00:27:18.210 ], 00:27:18.210 "driver_specific": {} 00:27:18.210 } 00:27:18.210 ] 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.210 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:18.468 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:18.468 "name": "Existed_Raid", 00:27:18.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.468 "strip_size_kb": 64, 00:27:18.468 "state": "configuring", 00:27:18.468 "raid_level": "concat", 00:27:18.468 "superblock": false, 
00:27:18.468 "num_base_bdevs": 3, 00:27:18.468 "num_base_bdevs_discovered": 2, 00:27:18.468 "num_base_bdevs_operational": 3, 00:27:18.468 "base_bdevs_list": [ 00:27:18.468 { 00:27:18.468 "name": "BaseBdev1", 00:27:18.468 "uuid": "4d9dfbf0-084a-4668-be1b-412ca601999a", 00:27:18.468 "is_configured": true, 00:27:18.468 "data_offset": 0, 00:27:18.468 "data_size": 65536 00:27:18.468 }, 00:27:18.468 { 00:27:18.468 "name": "BaseBdev2", 00:27:18.468 "uuid": "469e6699-c6c7-4477-aa2c-941468b771ff", 00:27:18.468 "is_configured": true, 00:27:18.468 "data_offset": 0, 00:27:18.468 "data_size": 65536 00:27:18.468 }, 00:27:18.468 { 00:27:18.468 "name": "BaseBdev3", 00:27:18.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.468 "is_configured": false, 00:27:18.468 "data_offset": 0, 00:27:18.468 "data_size": 0 00:27:18.468 } 00:27:18.468 ] 00:27:18.468 }' 00:27:18.468 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:18.468 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.402 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:19.402 [2024-05-15 11:20:37.933143] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:19.402 [2024-05-15 11:20:37.933188] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:27:19.402 [2024-05-15 11:20:37.933199] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:19.402 [2024-05-15 11:20:37.933312] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:27:19.402 [2024-05-15 11:20:37.933560] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:27:19.402 [2024-05-15 11:20:37.933575] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:27:19.402 [2024-05-15 11:20:37.933787] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.402 BaseBdev3 00:27:19.402 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:27:19.402 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:27:19.402 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:19.402 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:27:19.402 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:19.402 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:19.402 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:19.660 11:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:19.917 [ 00:27:19.917 { 00:27:19.917 "name": "BaseBdev3", 00:27:19.918 "aliases": [ 00:27:19.918 "d7383d24-bb8f-4e77-9899-23d8b5bc9d12" 00:27:19.918 ], 00:27:19.918 "product_name": "Malloc disk", 00:27:19.918 "block_size": 512, 00:27:19.918 "num_blocks": 65536, 00:27:19.918 "uuid": 
"d7383d24-bb8f-4e77-9899-23d8b5bc9d12", 00:27:19.918 "assigned_rate_limits": { 00:27:19.918 "rw_ios_per_sec": 0, 00:27:19.918 "rw_mbytes_per_sec": 0, 00:27:19.918 "r_mbytes_per_sec": 0, 00:27:19.918 "w_mbytes_per_sec": 0 00:27:19.918 }, 00:27:19.918 "claimed": true, 00:27:19.918 "claim_type": "exclusive_write", 00:27:19.918 "zoned": false, 00:27:19.918 "supported_io_types": { 00:27:19.918 "read": true, 00:27:19.918 "write": true, 00:27:19.918 "unmap": true, 00:27:19.918 "write_zeroes": true, 00:27:19.918 "flush": true, 00:27:19.918 "reset": true, 00:27:19.918 "compare": false, 00:27:19.918 "compare_and_write": false, 00:27:19.918 "abort": true, 00:27:19.918 "nvme_admin": false, 00:27:19.918 "nvme_io": false 00:27:19.918 }, 00:27:19.918 "memory_domains": [ 00:27:19.918 { 00:27:19.918 "dma_device_id": "system", 00:27:19.918 "dma_device_type": 1 00:27:19.918 }, 00:27:19.918 { 00:27:19.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:19.918 "dma_device_type": 2 00:27:19.918 } 00:27:19.918 ], 00:27:19.918 "driver_specific": {} 00:27:19.918 } 00:27:19.918 ] 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:19.918 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.176 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:20.176 "name": "Existed_Raid", 00:27:20.176 "uuid": "572bfb51-c0ee-4388-9be9-c1a4e9da65d3", 00:27:20.176 "strip_size_kb": 64, 00:27:20.176 "state": "online", 00:27:20.176 "raid_level": "concat", 00:27:20.176 "superblock": false, 00:27:20.176 "num_base_bdevs": 3, 00:27:20.176 "num_base_bdevs_discovered": 3, 00:27:20.176 "num_base_bdevs_operational": 3, 00:27:20.176 "base_bdevs_list": [ 00:27:20.176 { 00:27:20.176 "name": "BaseBdev1", 00:27:20.176 "uuid": "4d9dfbf0-084a-4668-be1b-412ca601999a", 00:27:20.176 "is_configured": true, 00:27:20.176 "data_offset": 0, 00:27:20.176 "data_size": 
65536 00:27:20.176 }, 00:27:20.176 { 00:27:20.176 "name": "BaseBdev2", 00:27:20.176 "uuid": "469e6699-c6c7-4477-aa2c-941468b771ff", 00:27:20.176 "is_configured": true, 00:27:20.176 "data_offset": 0, 00:27:20.176 "data_size": 65536 00:27:20.176 }, 00:27:20.176 { 00:27:20.176 "name": "BaseBdev3", 00:27:20.176 "uuid": "d7383d24-bb8f-4e77-9899-23d8b5bc9d12", 00:27:20.176 "is_configured": true, 00:27:20.176 "data_offset": 0, 00:27:20.176 "data_size": 65536 00:27:20.176 } 00:27:20.176 ] 00:27:20.176 }' 00:27:20.176 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:20.176 11:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.742 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:27:20.742 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:27:20.742 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:20.742 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:20.742 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:20.742 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:27:20.742 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:20.742 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:21.000 [2024-05-15 11:20:39.441710] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:21.000 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:21.000 "name": "Existed_Raid", 00:27:21.000 "aliases": [ 00:27:21.000 "572bfb51-c0ee-4388-9be9-c1a4e9da65d3" 00:27:21.000 ], 00:27:21.000 "product_name": "Raid Volume", 00:27:21.000 "block_size": 512, 00:27:21.000 "num_blocks": 196608, 00:27:21.000 "uuid": "572bfb51-c0ee-4388-9be9-c1a4e9da65d3", 00:27:21.000 "assigned_rate_limits": { 00:27:21.000 "rw_ios_per_sec": 0, 00:27:21.000 "rw_mbytes_per_sec": 0, 00:27:21.000 "r_mbytes_per_sec": 0, 00:27:21.000 "w_mbytes_per_sec": 0 00:27:21.000 }, 00:27:21.000 "claimed": false, 00:27:21.000 "zoned": false, 00:27:21.000 "supported_io_types": { 00:27:21.000 "read": true, 00:27:21.000 "write": true, 00:27:21.000 "unmap": true, 00:27:21.000 "write_zeroes": true, 00:27:21.000 "flush": true, 00:27:21.000 "reset": true, 00:27:21.000 "compare": false, 00:27:21.000 "compare_and_write": false, 00:27:21.000 "abort": false, 00:27:21.000 "nvme_admin": false, 00:27:21.000 "nvme_io": false 00:27:21.000 }, 00:27:21.000 "memory_domains": [ 00:27:21.000 { 00:27:21.000 "dma_device_id": "system", 00:27:21.000 "dma_device_type": 1 00:27:21.000 }, 00:27:21.000 { 00:27:21.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.000 "dma_device_type": 2 00:27:21.000 }, 00:27:21.000 { 00:27:21.000 "dma_device_id": "system", 00:27:21.000 "dma_device_type": 1 00:27:21.000 }, 00:27:21.000 { 00:27:21.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.000 "dma_device_type": 2 00:27:21.000 }, 00:27:21.000 { 00:27:21.000 "dma_device_id": "system", 00:27:21.000 "dma_device_type": 1 00:27:21.000 }, 00:27:21.000 { 00:27:21.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.000 "dma_device_type": 2 00:27:21.000 } 
00:27:21.000 ], 00:27:21.000 "driver_specific": { 00:27:21.000 "raid": { 00:27:21.000 "uuid": "572bfb51-c0ee-4388-9be9-c1a4e9da65d3", 00:27:21.000 "strip_size_kb": 64, 00:27:21.000 "state": "online", 00:27:21.000 "raid_level": "concat", 00:27:21.000 "superblock": false, 00:27:21.000 "num_base_bdevs": 3, 00:27:21.000 "num_base_bdevs_discovered": 3, 00:27:21.000 "num_base_bdevs_operational": 3, 00:27:21.000 "base_bdevs_list": [ 00:27:21.000 { 00:27:21.000 "name": "BaseBdev1", 00:27:21.000 "uuid": "4d9dfbf0-084a-4668-be1b-412ca601999a", 00:27:21.000 "is_configured": true, 00:27:21.000 "data_offset": 0, 00:27:21.000 "data_size": 65536 00:27:21.000 }, 00:27:21.000 { 00:27:21.000 "name": "BaseBdev2", 00:27:21.000 "uuid": "469e6699-c6c7-4477-aa2c-941468b771ff", 00:27:21.000 "is_configured": true, 00:27:21.000 "data_offset": 0, 00:27:21.000 "data_size": 65536 00:27:21.000 }, 00:27:21.000 { 00:27:21.000 "name": "BaseBdev3", 00:27:21.000 "uuid": "d7383d24-bb8f-4e77-9899-23d8b5bc9d12", 00:27:21.000 "is_configured": true, 00:27:21.000 "data_offset": 0, 00:27:21.000 "data_size": 65536 00:27:21.000 } 00:27:21.000 ] 00:27:21.000 } 00:27:21.000 } 00:27:21.000 }' 00:27:21.000 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:21.000 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:27:21.000 BaseBdev2 00:27:21.000 BaseBdev3' 00:27:21.000 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:21.000 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:21.000 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:21.259 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:21.259 "name": "BaseBdev1", 00:27:21.259 "aliases": [ 00:27:21.259 "4d9dfbf0-084a-4668-be1b-412ca601999a" 00:27:21.259 ], 00:27:21.259 "product_name": "Malloc disk", 00:27:21.259 "block_size": 512, 00:27:21.259 "num_blocks": 65536, 00:27:21.259 "uuid": "4d9dfbf0-084a-4668-be1b-412ca601999a", 00:27:21.259 "assigned_rate_limits": { 00:27:21.259 "rw_ios_per_sec": 0, 00:27:21.259 "rw_mbytes_per_sec": 0, 00:27:21.259 "r_mbytes_per_sec": 0, 00:27:21.259 "w_mbytes_per_sec": 0 00:27:21.259 }, 00:27:21.259 "claimed": true, 00:27:21.259 "claim_type": "exclusive_write", 00:27:21.259 "zoned": false, 00:27:21.260 "supported_io_types": { 00:27:21.260 "read": true, 00:27:21.260 "write": true, 00:27:21.260 "unmap": true, 00:27:21.260 "write_zeroes": true, 00:27:21.260 "flush": true, 00:27:21.260 "reset": true, 00:27:21.260 "compare": false, 00:27:21.260 "compare_and_write": false, 00:27:21.260 "abort": true, 00:27:21.260 "nvme_admin": false, 00:27:21.260 "nvme_io": false 00:27:21.260 }, 00:27:21.260 "memory_domains": [ 00:27:21.260 { 00:27:21.260 "dma_device_id": "system", 00:27:21.260 "dma_device_type": 1 00:27:21.260 }, 00:27:21.260 { 00:27:21.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.260 "dma_device_type": 2 00:27:21.260 } 00:27:21.260 ], 00:27:21.260 "driver_specific": {} 00:27:21.260 }' 00:27:21.260 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:21.260 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:21.518 11:20:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:21.518 11:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:21.518 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:21.518 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:21.518 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:21.518 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:21.776 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:21.776 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:21.776 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:21.776 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:21.776 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:21.776 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:21.776 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:22.035 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:22.035 "name": "BaseBdev2", 00:27:22.035 "aliases": [ 00:27:22.035 "469e6699-c6c7-4477-aa2c-941468b771ff" 00:27:22.035 ], 00:27:22.035 "product_name": "Malloc disk", 00:27:22.035 "block_size": 512, 00:27:22.035 "num_blocks": 65536, 00:27:22.035 "uuid": "469e6699-c6c7-4477-aa2c-941468b771ff", 00:27:22.035 "assigned_rate_limits": { 00:27:22.035 "rw_ios_per_sec": 0, 00:27:22.035 "rw_mbytes_per_sec": 0, 00:27:22.035 "r_mbytes_per_sec": 0, 00:27:22.035 "w_mbytes_per_sec": 0 00:27:22.035 }, 00:27:22.035 "claimed": true, 00:27:22.035 "claim_type": "exclusive_write", 00:27:22.035 "zoned": false, 00:27:22.035 "supported_io_types": { 00:27:22.035 "read": true, 00:27:22.035 "write": true, 00:27:22.035 "unmap": true, 00:27:22.035 "write_zeroes": true, 00:27:22.035 "flush": true, 00:27:22.035 "reset": true, 00:27:22.035 "compare": false, 00:27:22.035 "compare_and_write": false, 00:27:22.035 "abort": true, 00:27:22.035 "nvme_admin": false, 00:27:22.035 "nvme_io": false 00:27:22.035 }, 00:27:22.035 "memory_domains": [ 00:27:22.035 { 00:27:22.035 "dma_device_id": "system", 00:27:22.035 "dma_device_type": 1 00:27:22.035 }, 00:27:22.035 { 00:27:22.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.035 "dma_device_type": 2 00:27:22.035 } 00:27:22.035 ], 00:27:22.035 "driver_specific": {} 00:27:22.035 }' 00:27:22.035 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:22.035 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:22.035 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:22.035 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:22.293 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:22.293 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:22.293 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
00:27:22.293 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:22.293 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:22.293 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:22.552 11:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:22.552 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:22.552 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:22.552 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:22.552 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:22.809 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:22.809 "name": "BaseBdev3", 00:27:22.809 "aliases": [ 00:27:22.809 "d7383d24-bb8f-4e77-9899-23d8b5bc9d12" 00:27:22.809 ], 00:27:22.809 "product_name": "Malloc disk", 00:27:22.809 "block_size": 512, 00:27:22.809 "num_blocks": 65536, 00:27:22.809 "uuid": "d7383d24-bb8f-4e77-9899-23d8b5bc9d12", 00:27:22.809 "assigned_rate_limits": { 00:27:22.809 "rw_ios_per_sec": 0, 00:27:22.809 "rw_mbytes_per_sec": 0, 00:27:22.809 "r_mbytes_per_sec": 0, 00:27:22.809 "w_mbytes_per_sec": 0 00:27:22.809 }, 00:27:22.809 "claimed": true, 00:27:22.810 "claim_type": "exclusive_write", 00:27:22.810 "zoned": false, 00:27:22.810 "supported_io_types": { 00:27:22.810 "read": true, 00:27:22.810 "write": true, 00:27:22.810 "unmap": true, 00:27:22.810 "write_zeroes": true, 00:27:22.810 "flush": true, 00:27:22.810 "reset": true, 00:27:22.810 "compare": false, 00:27:22.810 "compare_and_write": false, 00:27:22.810 "abort": true, 00:27:22.810 "nvme_admin": false, 00:27:22.810 "nvme_io": false 00:27:22.810 }, 00:27:22.810 "memory_domains": [ 00:27:22.810 { 00:27:22.810 "dma_device_id": "system", 00:27:22.810 "dma_device_type": 1 00:27:22.810 }, 00:27:22.810 { 00:27:22.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.810 "dma_device_type": 2 00:27:22.810 } 00:27:22.810 ], 00:27:22.810 "driver_specific": {} 00:27:22.810 }' 00:27:22.810 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:22.810 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:22.810 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:22.810 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:22.810 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:23.068 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:23.068 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:23.068 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:23.068 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:23.068 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:23.068 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:23.326 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ 
null == null ]] 00:27:23.326 11:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:23.326 [2024-05-15 11:20:41.914002] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:23.326 [2024-05-15 11:20:41.914068] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:23.326 [2024-05-15 11:20:41.914123] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:23.584 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:27:23.584 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:27:23.584 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:27:23.584 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:23.585 "name": "Existed_Raid", 00:27:23.585 "uuid": "572bfb51-c0ee-4388-9be9-c1a4e9da65d3", 00:27:23.585 "strip_size_kb": 64, 00:27:23.585 "state": "offline", 00:27:23.585 "raid_level": "concat", 00:27:23.585 "superblock": false, 00:27:23.585 "num_base_bdevs": 3, 00:27:23.585 "num_base_bdevs_discovered": 2, 00:27:23.585 "num_base_bdevs_operational": 2, 00:27:23.585 "base_bdevs_list": [ 00:27:23.585 { 00:27:23.585 "name": null, 00:27:23.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.585 "is_configured": false, 00:27:23.585 "data_offset": 0, 00:27:23.585 "data_size": 65536 00:27:23.585 }, 00:27:23.585 { 00:27:23.585 "name": "BaseBdev2", 00:27:23.585 "uuid": "469e6699-c6c7-4477-aa2c-941468b771ff", 00:27:23.585 "is_configured": true, 00:27:23.585 "data_offset": 0, 00:27:23.585 "data_size": 65536 00:27:23.585 }, 00:27:23.585 { 00:27:23.585 "name": "BaseBdev3", 00:27:23.585 
"uuid": "d7383d24-bb8f-4e77-9899-23d8b5bc9d12", 00:27:23.585 "is_configured": true, 00:27:23.585 "data_offset": 0, 00:27:23.585 "data_size": 65536 00:27:23.585 } 00:27:23.585 ] 00:27:23.585 }' 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:23.585 11:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:24.537 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:24.537 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:24.537 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.537 11:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:27:24.794 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:27:24.794 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:24.794 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:24.794 [2024-05-15 11:20:43.425206] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:25.052 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:25.052 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:25.052 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.052 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:27:25.311 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:27:25.311 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:25.311 11:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:25.311 [2024-05-15 11:20:43.945261] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:25.311 [2024-05-15 11:20:43.945341] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:27:25.569 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:25.569 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:25.569 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.569 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:27:25.826 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:27:25.826 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:27:25.826 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:27:25.826 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:27:25.826 11:20:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:27:25.826 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:26.084 BaseBdev2 00:27:26.084 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:27:26.084 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:27:26.084 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:26.084 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:27:26.084 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:26.084 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:26.084 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:26.342 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:26.600 [ 00:27:26.600 { 00:27:26.600 "name": "BaseBdev2", 00:27:26.600 "aliases": [ 00:27:26.600 "9e6210ce-0ddf-4a7a-8716-8eacb04e382f" 00:27:26.600 ], 00:27:26.600 "product_name": "Malloc disk", 00:27:26.600 "block_size": 512, 00:27:26.600 "num_blocks": 65536, 00:27:26.600 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:26.600 "assigned_rate_limits": { 00:27:26.600 "rw_ios_per_sec": 0, 00:27:26.600 "rw_mbytes_per_sec": 0, 00:27:26.600 "r_mbytes_per_sec": 0, 00:27:26.600 "w_mbytes_per_sec": 0 00:27:26.600 }, 00:27:26.600 "claimed": false, 00:27:26.600 "zoned": false, 00:27:26.600 "supported_io_types": { 00:27:26.600 "read": true, 00:27:26.600 "write": true, 00:27:26.600 "unmap": true, 00:27:26.600 "write_zeroes": true, 00:27:26.600 "flush": true, 00:27:26.600 "reset": true, 00:27:26.600 "compare": false, 00:27:26.600 "compare_and_write": false, 00:27:26.600 "abort": true, 00:27:26.600 "nvme_admin": false, 00:27:26.600 "nvme_io": false 00:27:26.600 }, 00:27:26.600 "memory_domains": [ 00:27:26.600 { 00:27:26.600 "dma_device_id": "system", 00:27:26.600 "dma_device_type": 1 00:27:26.600 }, 00:27:26.600 { 00:27:26.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.600 "dma_device_type": 2 00:27:26.600 } 00:27:26.600 ], 00:27:26.600 "driver_specific": {} 00:27:26.600 } 00:27:26.600 ] 00:27:26.600 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:27:26.600 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:27:26.600 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:27:26.600 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:26.858 BaseBdev3 00:27:26.858 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:27:26.858 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:27:26.858 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:27:26.858 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:27:26.858 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:26.858 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:26.858 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:27.115 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:27.115 [ 00:27:27.115 { 00:27:27.115 "name": "BaseBdev3", 00:27:27.115 "aliases": [ 00:27:27.115 "36065f83-dbc8-49c5-b34b-5e4c396e6742" 00:27:27.115 ], 00:27:27.115 "product_name": "Malloc disk", 00:27:27.115 "block_size": 512, 00:27:27.115 "num_blocks": 65536, 00:27:27.115 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:27.115 "assigned_rate_limits": { 00:27:27.115 "rw_ios_per_sec": 0, 00:27:27.115 "rw_mbytes_per_sec": 0, 00:27:27.115 "r_mbytes_per_sec": 0, 00:27:27.115 "w_mbytes_per_sec": 0 00:27:27.115 }, 00:27:27.115 "claimed": false, 00:27:27.115 "zoned": false, 00:27:27.115 "supported_io_types": { 00:27:27.115 "read": true, 00:27:27.115 "write": true, 00:27:27.115 "unmap": true, 00:27:27.115 "write_zeroes": true, 00:27:27.115 "flush": true, 00:27:27.115 "reset": true, 00:27:27.116 "compare": false, 00:27:27.116 "compare_and_write": false, 00:27:27.116 "abort": true, 00:27:27.116 "nvme_admin": false, 00:27:27.116 "nvme_io": false 00:27:27.116 }, 00:27:27.116 "memory_domains": [ 00:27:27.116 { 00:27:27.116 "dma_device_id": "system", 00:27:27.116 "dma_device_type": 1 00:27:27.116 }, 00:27:27.116 { 00:27:27.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.116 "dma_device_type": 2 00:27:27.116 } 00:27:27.116 ], 00:27:27.116 "driver_specific": {} 00:27:27.116 } 00:27:27.116 ] 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:27.373 [2024-05-15 11:20:45.944404] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:27.373 [2024-05-15 11:20:45.944496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:27.373 [2024-05-15 11:20:45.944524] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:27.373 [2024-05-15 11:20:45.946184] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.373 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.631 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:27.631 "name": "Existed_Raid", 00:27:27.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.631 "strip_size_kb": 64, 00:27:27.631 "state": "configuring", 00:27:27.631 "raid_level": "concat", 00:27:27.631 "superblock": false, 00:27:27.631 "num_base_bdevs": 3, 00:27:27.631 "num_base_bdevs_discovered": 2, 00:27:27.631 "num_base_bdevs_operational": 3, 00:27:27.631 "base_bdevs_list": [ 00:27:27.631 { 00:27:27.631 "name": "BaseBdev1", 00:27:27.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.631 "is_configured": false, 00:27:27.631 "data_offset": 0, 00:27:27.631 "data_size": 0 00:27:27.631 }, 00:27:27.631 { 00:27:27.631 "name": "BaseBdev2", 00:27:27.631 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:27.631 "is_configured": true, 00:27:27.631 "data_offset": 0, 00:27:27.631 "data_size": 65536 00:27:27.631 }, 00:27:27.631 { 00:27:27.631 "name": "BaseBdev3", 00:27:27.631 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:27.631 "is_configured": true, 00:27:27.631 "data_offset": 0, 00:27:27.631 "data_size": 65536 00:27:27.631 } 00:27:27.631 ] 00:27:27.631 }' 00:27:27.631 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:27.631 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.565 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:28.565 [2024-05-15 11:20:47.180600] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- 
# local raid_bdev_info 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.565 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:28.824 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:28.824 "name": "Existed_Raid", 00:27:28.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.824 "strip_size_kb": 64, 00:27:28.824 "state": "configuring", 00:27:28.824 "raid_level": "concat", 00:27:28.824 "superblock": false, 00:27:28.824 "num_base_bdevs": 3, 00:27:28.824 "num_base_bdevs_discovered": 1, 00:27:28.824 "num_base_bdevs_operational": 3, 00:27:28.824 "base_bdevs_list": [ 00:27:28.824 { 00:27:28.824 "name": "BaseBdev1", 00:27:28.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.824 "is_configured": false, 00:27:28.824 "data_offset": 0, 00:27:28.824 "data_size": 0 00:27:28.824 }, 00:27:28.824 { 00:27:28.824 "name": null, 00:27:28.824 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:28.824 "is_configured": false, 00:27:28.824 "data_offset": 0, 00:27:28.824 "data_size": 65536 00:27:28.824 }, 00:27:28.824 { 00:27:28.824 "name": "BaseBdev3", 00:27:28.824 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:28.824 "is_configured": true, 00:27:28.824 "data_offset": 0, 00:27:28.824 "data_size": 65536 00:27:28.824 } 00:27:28.824 ] 00:27:28.824 }' 00:27:28.824 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:28.824 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.757 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.757 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:29.757 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:27:29.757 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:30.014 BaseBdev1 00:27:30.014 [2024-05-15 11:20:48.613129] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:30.014 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:27:30.014 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:27:30.014 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:30.014 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:27:30.014 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:30.014 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:30.014 11:20:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:30.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:30.530 [ 00:27:30.530 { 00:27:30.530 "name": "BaseBdev1", 00:27:30.530 "aliases": [ 00:27:30.530 "307d2636-b8a3-4bfe-9885-6c3ec8085e57" 00:27:30.530 ], 00:27:30.530 "product_name": "Malloc disk", 00:27:30.530 "block_size": 512, 00:27:30.530 "num_blocks": 65536, 00:27:30.530 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:30.530 "assigned_rate_limits": { 00:27:30.530 "rw_ios_per_sec": 0, 00:27:30.530 "rw_mbytes_per_sec": 0, 00:27:30.530 "r_mbytes_per_sec": 0, 00:27:30.530 "w_mbytes_per_sec": 0 00:27:30.530 }, 00:27:30.530 "claimed": true, 00:27:30.530 "claim_type": "exclusive_write", 00:27:30.530 "zoned": false, 00:27:30.530 "supported_io_types": { 00:27:30.530 "read": true, 00:27:30.530 "write": true, 00:27:30.530 "unmap": true, 00:27:30.530 "write_zeroes": true, 00:27:30.530 "flush": true, 00:27:30.530 "reset": true, 00:27:30.530 "compare": false, 00:27:30.530 "compare_and_write": false, 00:27:30.530 "abort": true, 00:27:30.530 "nvme_admin": false, 00:27:30.530 "nvme_io": false 00:27:30.530 }, 00:27:30.530 "memory_domains": [ 00:27:30.530 { 00:27:30.530 "dma_device_id": "system", 00:27:30.530 "dma_device_type": 1 00:27:30.530 }, 00:27:30.530 { 00:27:30.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.530 "dma_device_type": 2 00:27:30.530 } 00:27:30.530 ], 00:27:30.530 "driver_specific": {} 00:27:30.530 } 00:27:30.530 ] 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.530 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:30.795 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:30.795 "name": "Existed_Raid", 00:27:30.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.795 "strip_size_kb": 64, 
00:27:30.795 "state": "configuring", 00:27:30.795 "raid_level": "concat", 00:27:30.795 "superblock": false, 00:27:30.795 "num_base_bdevs": 3, 00:27:30.795 "num_base_bdevs_discovered": 2, 00:27:30.795 "num_base_bdevs_operational": 3, 00:27:30.795 "base_bdevs_list": [ 00:27:30.795 { 00:27:30.795 "name": "BaseBdev1", 00:27:30.795 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:30.795 "is_configured": true, 00:27:30.795 "data_offset": 0, 00:27:30.795 "data_size": 65536 00:27:30.795 }, 00:27:30.795 { 00:27:30.795 "name": null, 00:27:30.795 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:30.795 "is_configured": false, 00:27:30.795 "data_offset": 0, 00:27:30.795 "data_size": 65536 00:27:30.795 }, 00:27:30.795 { 00:27:30.795 "name": "BaseBdev3", 00:27:30.795 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:30.795 "is_configured": true, 00:27:30.795 "data_offset": 0, 00:27:30.795 "data_size": 65536 00:27:30.795 } 00:27:30.795 ] 00:27:30.795 }' 00:27:30.795 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:30.795 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.380 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.380 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:31.638 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:31.638 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:27:31.896 [2024-05-15 11:20:50.465588] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.896 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:32.154 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:32.154 "name": "Existed_Raid", 00:27:32.154 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:32.154 "strip_size_kb": 64, 00:27:32.154 "state": "configuring", 00:27:32.154 "raid_level": "concat", 00:27:32.154 "superblock": false, 00:27:32.154 "num_base_bdevs": 3, 00:27:32.154 "num_base_bdevs_discovered": 1, 00:27:32.154 "num_base_bdevs_operational": 3, 00:27:32.154 "base_bdevs_list": [ 00:27:32.154 { 00:27:32.154 "name": "BaseBdev1", 00:27:32.154 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:32.154 "is_configured": true, 00:27:32.154 "data_offset": 0, 00:27:32.154 "data_size": 65536 00:27:32.154 }, 00:27:32.154 { 00:27:32.154 "name": null, 00:27:32.154 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:32.154 "is_configured": false, 00:27:32.154 "data_offset": 0, 00:27:32.154 "data_size": 65536 00:27:32.154 }, 00:27:32.154 { 00:27:32.154 "name": null, 00:27:32.154 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:32.154 "is_configured": false, 00:27:32.154 "data_offset": 0, 00:27:32.154 "data_size": 65536 00:27:32.154 } 00:27:32.154 ] 00:27:32.154 }' 00:27:32.154 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:32.154 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.086 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.086 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:33.086 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:27:33.086 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:33.343 [2024-05-15 11:20:51.841928] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:33.343 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:33.344 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.344 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:33.601 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:27:33.601 "name": "Existed_Raid", 00:27:33.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.602 "strip_size_kb": 64, 00:27:33.602 "state": "configuring", 00:27:33.602 "raid_level": "concat", 00:27:33.602 "superblock": false, 00:27:33.602 "num_base_bdevs": 3, 00:27:33.602 "num_base_bdevs_discovered": 2, 00:27:33.602 "num_base_bdevs_operational": 3, 00:27:33.602 "base_bdevs_list": [ 00:27:33.602 { 00:27:33.602 "name": "BaseBdev1", 00:27:33.602 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:33.602 "is_configured": true, 00:27:33.602 "data_offset": 0, 00:27:33.602 "data_size": 65536 00:27:33.602 }, 00:27:33.602 { 00:27:33.602 "name": null, 00:27:33.602 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:33.602 "is_configured": false, 00:27:33.602 "data_offset": 0, 00:27:33.602 "data_size": 65536 00:27:33.602 }, 00:27:33.602 { 00:27:33.602 "name": "BaseBdev3", 00:27:33.602 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:33.602 "is_configured": true, 00:27:33.602 "data_offset": 0, 00:27:33.602 "data_size": 65536 00:27:33.602 } 00:27:33.602 ] 00:27:33.602 }' 00:27:33.602 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:33.602 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.166 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.166 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:34.424 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:27:34.424 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:34.685 [2024-05-15 11:20:53.270097] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:34.943 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.200 11:20:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:35.200 "name": "Existed_Raid", 00:27:35.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.200 "strip_size_kb": 64, 00:27:35.200 "state": "configuring", 00:27:35.200 "raid_level": "concat", 00:27:35.201 "superblock": false, 00:27:35.201 "num_base_bdevs": 3, 00:27:35.201 "num_base_bdevs_discovered": 1, 00:27:35.201 "num_base_bdevs_operational": 3, 00:27:35.201 "base_bdevs_list": [ 00:27:35.201 { 00:27:35.201 "name": null, 00:27:35.201 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:35.201 "is_configured": false, 00:27:35.201 "data_offset": 0, 00:27:35.201 "data_size": 65536 00:27:35.201 }, 00:27:35.201 { 00:27:35.201 "name": null, 00:27:35.201 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:35.201 "is_configured": false, 00:27:35.201 "data_offset": 0, 00:27:35.201 "data_size": 65536 00:27:35.201 }, 00:27:35.201 { 00:27:35.201 "name": "BaseBdev3", 00:27:35.201 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:35.201 "is_configured": true, 00:27:35.201 "data_offset": 0, 00:27:35.201 "data_size": 65536 00:27:35.201 } 00:27:35.201 ] 00:27:35.201 }' 00:27:35.201 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:35.201 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.766 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.766 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:36.024 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:27:36.024 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:36.282 [2024-05-15 11:20:54.763645] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:36.282 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.540 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:36.540 "name": "Existed_Raid", 00:27:36.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.540 "strip_size_kb": 64, 00:27:36.540 "state": "configuring", 00:27:36.540 "raid_level": "concat", 00:27:36.540 "superblock": false, 00:27:36.540 "num_base_bdevs": 3, 00:27:36.540 "num_base_bdevs_discovered": 2, 00:27:36.540 "num_base_bdevs_operational": 3, 00:27:36.540 "base_bdevs_list": [ 00:27:36.540 { 00:27:36.540 "name": null, 00:27:36.540 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:36.540 "is_configured": false, 00:27:36.540 "data_offset": 0, 00:27:36.540 "data_size": 65536 00:27:36.540 }, 00:27:36.540 { 00:27:36.540 "name": "BaseBdev2", 00:27:36.540 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:36.540 "is_configured": true, 00:27:36.540 "data_offset": 0, 00:27:36.540 "data_size": 65536 00:27:36.540 }, 00:27:36.540 { 00:27:36.540 "name": "BaseBdev3", 00:27:36.540 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:36.540 "is_configured": true, 00:27:36.540 "data_offset": 0, 00:27:36.540 "data_size": 65536 00:27:36.540 } 00:27:36.540 ] 00:27:36.540 }' 00:27:36.540 11:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:36.540 11:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.106 11:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.106 11:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:37.364 11:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:27:37.364 11:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.364 11:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:37.621 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 307d2636-b8a3-4bfe-9885-6c3ec8085e57 00:27:37.948 [2024-05-15 11:20:56.327446] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:37.948 [2024-05-15 11:20:56.327495] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:27:37.948 [2024-05-15 11:20:56.327506] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:37.948 [2024-05-15 11:20:56.327610] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:27:37.948 NewBaseBdev 00:27:37.948 [2024-05-15 11:20:56.328153] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:27:37.948 [2024-05-15 11:20:56.328175] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:27:37.948 [2024-05-15 11:20:56.328364] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:37.948 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:27:37.948 11:20:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:27:37.948 11:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:37.948 11:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:27:37.948 11:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:37.948 11:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:37.948 11:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:37.948 11:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:38.207 [ 00:27:38.207 { 00:27:38.207 "name": "NewBaseBdev", 00:27:38.207 "aliases": [ 00:27:38.207 "307d2636-b8a3-4bfe-9885-6c3ec8085e57" 00:27:38.207 ], 00:27:38.207 "product_name": "Malloc disk", 00:27:38.207 "block_size": 512, 00:27:38.207 "num_blocks": 65536, 00:27:38.207 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:38.207 "assigned_rate_limits": { 00:27:38.207 "rw_ios_per_sec": 0, 00:27:38.207 "rw_mbytes_per_sec": 0, 00:27:38.207 "r_mbytes_per_sec": 0, 00:27:38.207 "w_mbytes_per_sec": 0 00:27:38.207 }, 00:27:38.207 "claimed": true, 00:27:38.207 "claim_type": "exclusive_write", 00:27:38.207 "zoned": false, 00:27:38.207 "supported_io_types": { 00:27:38.207 "read": true, 00:27:38.207 "write": true, 00:27:38.207 "unmap": true, 00:27:38.207 "write_zeroes": true, 00:27:38.207 "flush": true, 00:27:38.207 "reset": true, 00:27:38.207 "compare": false, 00:27:38.207 "compare_and_write": false, 00:27:38.207 "abort": true, 00:27:38.207 "nvme_admin": false, 00:27:38.207 "nvme_io": false 00:27:38.207 }, 00:27:38.207 "memory_domains": [ 00:27:38.207 { 00:27:38.207 "dma_device_id": "system", 00:27:38.207 "dma_device_type": 1 00:27:38.207 }, 00:27:38.207 { 00:27:38.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.207 "dma_device_type": 2 00:27:38.207 } 00:27:38.207 ], 00:27:38.207 "driver_specific": {} 00:27:38.207 } 00:27:38.207 ] 00:27:38.207 11:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:27:38.207 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 
00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.208 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:38.467 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:38.467 "name": "Existed_Raid", 00:27:38.467 "uuid": "3162128e-33d7-47ba-bd2b-6e92f8e4002f", 00:27:38.467 "strip_size_kb": 64, 00:27:38.467 "state": "online", 00:27:38.467 "raid_level": "concat", 00:27:38.467 "superblock": false, 00:27:38.467 "num_base_bdevs": 3, 00:27:38.467 "num_base_bdevs_discovered": 3, 00:27:38.467 "num_base_bdevs_operational": 3, 00:27:38.467 "base_bdevs_list": [ 00:27:38.467 { 00:27:38.467 "name": "NewBaseBdev", 00:27:38.467 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:38.467 "is_configured": true, 00:27:38.467 "data_offset": 0, 00:27:38.467 "data_size": 65536 00:27:38.467 }, 00:27:38.467 { 00:27:38.467 "name": "BaseBdev2", 00:27:38.467 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:38.467 "is_configured": true, 00:27:38.467 "data_offset": 0, 00:27:38.467 "data_size": 65536 00:27:38.467 }, 00:27:38.467 { 00:27:38.467 "name": "BaseBdev3", 00:27:38.467 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:38.467 "is_configured": true, 00:27:38.467 "data_offset": 0, 00:27:38.467 "data_size": 65536 00:27:38.467 } 00:27:38.467 ] 00:27:38.467 }' 00:27:38.467 11:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:38.467 11:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.035 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:27:39.035 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:27:39.035 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:39.035 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:39.035 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:39.035 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:27:39.036 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:39.036 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:39.293 [2024-05-15 11:20:57.840042] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:39.293 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:39.293 "name": "Existed_Raid", 00:27:39.293 "aliases": [ 00:27:39.293 "3162128e-33d7-47ba-bd2b-6e92f8e4002f" 00:27:39.293 ], 00:27:39.293 "product_name": "Raid Volume", 00:27:39.293 "block_size": 512, 00:27:39.293 "num_blocks": 196608, 00:27:39.293 "uuid": "3162128e-33d7-47ba-bd2b-6e92f8e4002f", 00:27:39.293 "assigned_rate_limits": { 00:27:39.293 "rw_ios_per_sec": 0, 00:27:39.293 "rw_mbytes_per_sec": 0, 00:27:39.293 "r_mbytes_per_sec": 0, 00:27:39.293 "w_mbytes_per_sec": 0 00:27:39.293 }, 00:27:39.293 "claimed": false, 00:27:39.293 "zoned": false, 00:27:39.293 "supported_io_types": { 00:27:39.293 "read": true, 00:27:39.293 "write": 
true, 00:27:39.293 "unmap": true, 00:27:39.293 "write_zeroes": true, 00:27:39.293 "flush": true, 00:27:39.293 "reset": true, 00:27:39.293 "compare": false, 00:27:39.293 "compare_and_write": false, 00:27:39.293 "abort": false, 00:27:39.293 "nvme_admin": false, 00:27:39.293 "nvme_io": false 00:27:39.293 }, 00:27:39.293 "memory_domains": [ 00:27:39.293 { 00:27:39.293 "dma_device_id": "system", 00:27:39.293 "dma_device_type": 1 00:27:39.293 }, 00:27:39.293 { 00:27:39.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.293 "dma_device_type": 2 00:27:39.293 }, 00:27:39.293 { 00:27:39.293 "dma_device_id": "system", 00:27:39.293 "dma_device_type": 1 00:27:39.293 }, 00:27:39.293 { 00:27:39.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.293 "dma_device_type": 2 00:27:39.293 }, 00:27:39.293 { 00:27:39.294 "dma_device_id": "system", 00:27:39.294 "dma_device_type": 1 00:27:39.294 }, 00:27:39.294 { 00:27:39.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.294 "dma_device_type": 2 00:27:39.294 } 00:27:39.294 ], 00:27:39.294 "driver_specific": { 00:27:39.294 "raid": { 00:27:39.294 "uuid": "3162128e-33d7-47ba-bd2b-6e92f8e4002f", 00:27:39.294 "strip_size_kb": 64, 00:27:39.294 "state": "online", 00:27:39.294 "raid_level": "concat", 00:27:39.294 "superblock": false, 00:27:39.294 "num_base_bdevs": 3, 00:27:39.294 "num_base_bdevs_discovered": 3, 00:27:39.294 "num_base_bdevs_operational": 3, 00:27:39.294 "base_bdevs_list": [ 00:27:39.294 { 00:27:39.294 "name": "NewBaseBdev", 00:27:39.294 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:39.294 "is_configured": true, 00:27:39.294 "data_offset": 0, 00:27:39.294 "data_size": 65536 00:27:39.294 }, 00:27:39.294 { 00:27:39.294 "name": "BaseBdev2", 00:27:39.294 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:39.294 "is_configured": true, 00:27:39.294 "data_offset": 0, 00:27:39.294 "data_size": 65536 00:27:39.294 }, 00:27:39.294 { 00:27:39.294 "name": "BaseBdev3", 00:27:39.294 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:39.294 "is_configured": true, 00:27:39.294 "data_offset": 0, 00:27:39.294 "data_size": 65536 00:27:39.294 } 00:27:39.294 ] 00:27:39.294 } 00:27:39.294 } 00:27:39.294 }' 00:27:39.294 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:39.294 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:27:39.294 BaseBdev2 00:27:39.294 BaseBdev3' 00:27:39.294 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:39.294 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:27:39.294 11:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:39.552 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:39.552 "name": "NewBaseBdev", 00:27:39.552 "aliases": [ 00:27:39.552 "307d2636-b8a3-4bfe-9885-6c3ec8085e57" 00:27:39.552 ], 00:27:39.552 "product_name": "Malloc disk", 00:27:39.552 "block_size": 512, 00:27:39.552 "num_blocks": 65536, 00:27:39.552 "uuid": "307d2636-b8a3-4bfe-9885-6c3ec8085e57", 00:27:39.552 "assigned_rate_limits": { 00:27:39.552 "rw_ios_per_sec": 0, 00:27:39.552 "rw_mbytes_per_sec": 0, 00:27:39.552 "r_mbytes_per_sec": 0, 00:27:39.552 "w_mbytes_per_sec": 0 00:27:39.552 }, 00:27:39.552 "claimed": true, 
00:27:39.552 "claim_type": "exclusive_write", 00:27:39.552 "zoned": false, 00:27:39.552 "supported_io_types": { 00:27:39.552 "read": true, 00:27:39.552 "write": true, 00:27:39.552 "unmap": true, 00:27:39.552 "write_zeroes": true, 00:27:39.552 "flush": true, 00:27:39.552 "reset": true, 00:27:39.552 "compare": false, 00:27:39.552 "compare_and_write": false, 00:27:39.552 "abort": true, 00:27:39.552 "nvme_admin": false, 00:27:39.552 "nvme_io": false 00:27:39.552 }, 00:27:39.552 "memory_domains": [ 00:27:39.552 { 00:27:39.552 "dma_device_id": "system", 00:27:39.552 "dma_device_type": 1 00:27:39.552 }, 00:27:39.552 { 00:27:39.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.552 "dma_device_type": 2 00:27:39.552 } 00:27:39.552 ], 00:27:39.552 "driver_specific": {} 00:27:39.552 }' 00:27:39.552 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:39.552 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:39.811 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:39.811 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:39.811 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:39.811 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:39.811 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:39.811 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:40.069 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:40.069 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:40.069 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:40.069 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:40.069 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:40.069 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:40.069 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:40.328 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:40.328 "name": "BaseBdev2", 00:27:40.328 "aliases": [ 00:27:40.328 "9e6210ce-0ddf-4a7a-8716-8eacb04e382f" 00:27:40.328 ], 00:27:40.328 "product_name": "Malloc disk", 00:27:40.328 "block_size": 512, 00:27:40.328 "num_blocks": 65536, 00:27:40.328 "uuid": "9e6210ce-0ddf-4a7a-8716-8eacb04e382f", 00:27:40.328 "assigned_rate_limits": { 00:27:40.328 "rw_ios_per_sec": 0, 00:27:40.328 "rw_mbytes_per_sec": 0, 00:27:40.328 "r_mbytes_per_sec": 0, 00:27:40.328 "w_mbytes_per_sec": 0 00:27:40.328 }, 00:27:40.328 "claimed": true, 00:27:40.328 "claim_type": "exclusive_write", 00:27:40.328 "zoned": false, 00:27:40.328 "supported_io_types": { 00:27:40.328 "read": true, 00:27:40.328 "write": true, 00:27:40.328 "unmap": true, 00:27:40.328 "write_zeroes": true, 00:27:40.328 "flush": true, 00:27:40.328 "reset": true, 00:27:40.328 "compare": false, 00:27:40.328 "compare_and_write": false, 00:27:40.328 "abort": true, 00:27:40.328 "nvme_admin": false, 00:27:40.328 "nvme_io": false 00:27:40.328 }, 00:27:40.328 "memory_domains": [ 
00:27:40.328 { 00:27:40.328 "dma_device_id": "system", 00:27:40.328 "dma_device_type": 1 00:27:40.328 }, 00:27:40.328 { 00:27:40.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.328 "dma_device_type": 2 00:27:40.328 } 00:27:40.328 ], 00:27:40.328 "driver_specific": {} 00:27:40.328 }' 00:27:40.328 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:40.328 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:40.328 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:40.328 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:40.585 11:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:40.585 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:40.585 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:40.585 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:40.585 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:40.585 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:40.585 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:40.843 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:40.843 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:40.843 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:40.843 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:40.843 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:40.843 "name": "BaseBdev3", 00:27:40.843 "aliases": [ 00:27:40.843 "36065f83-dbc8-49c5-b34b-5e4c396e6742" 00:27:40.843 ], 00:27:40.843 "product_name": "Malloc disk", 00:27:40.843 "block_size": 512, 00:27:40.843 "num_blocks": 65536, 00:27:40.843 "uuid": "36065f83-dbc8-49c5-b34b-5e4c396e6742", 00:27:40.843 "assigned_rate_limits": { 00:27:40.843 "rw_ios_per_sec": 0, 00:27:40.843 "rw_mbytes_per_sec": 0, 00:27:40.843 "r_mbytes_per_sec": 0, 00:27:40.843 "w_mbytes_per_sec": 0 00:27:40.843 }, 00:27:40.843 "claimed": true, 00:27:40.843 "claim_type": "exclusive_write", 00:27:40.843 "zoned": false, 00:27:40.843 "supported_io_types": { 00:27:40.843 "read": true, 00:27:40.843 "write": true, 00:27:40.843 "unmap": true, 00:27:40.843 "write_zeroes": true, 00:27:40.843 "flush": true, 00:27:40.843 "reset": true, 00:27:40.843 "compare": false, 00:27:40.843 "compare_and_write": false, 00:27:40.843 "abort": true, 00:27:40.843 "nvme_admin": false, 00:27:40.843 "nvme_io": false 00:27:40.843 }, 00:27:40.843 "memory_domains": [ 00:27:40.843 { 00:27:40.843 "dma_device_id": "system", 00:27:40.843 "dma_device_type": 1 00:27:40.843 }, 00:27:40.843 { 00:27:40.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.843 "dma_device_type": 2 00:27:40.843 } 00:27:40.843 ], 00:27:40.843 "driver_specific": {} 00:27:40.843 }' 00:27:40.843 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:41.101 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 
00:27:41.101 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:41.101 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:41.101 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:41.101 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:41.101 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:41.357 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:41.357 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:41.357 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:41.357 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:41.357 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:41.357 11:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:41.615 [2024-05-15 11:21:00.163950] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:41.615 [2024-05-15 11:21:00.163998] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:41.615 [2024-05-15 11:21:00.164073] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:41.615 [2024-05-15 11:21:00.164120] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:41.615 [2024-05-15 11:21:00.164132] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 59179 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 59179 ']' 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 59179 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59179 00:27:41.615 killing process with pid 59179 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59179' 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 59179 00:27:41.615 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 59179 00:27:41.615 [2024-05-15 11:21:00.199657] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:41.872 [2024-05-15 11:21:00.451759] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:43.250 ************************************ 00:27:43.250 END TEST raid_state_function_test 00:27:43.250 ************************************ 
00:27:43.250 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:27:43.250 00:27:43.250 real 0m31.128s 00:27:43.250 user 0m58.633s 00:27:43.250 sys 0m3.194s 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.250 11:21:01 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:27:43.250 11:21:01 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:27:43.250 11:21:01 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:43.250 11:21:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:43.250 ************************************ 00:27:43.250 START TEST raid_state_function_test_sb 00:27:43.250 ************************************ 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat 
'!=' raid1 ']' 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:27:43.250 Process raid pid: 60182 00:27:43.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=60182 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 60182' 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 60182 /var/tmp/spdk-raid.sock 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:43.250 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 60182 ']' 00:27:43.251 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:43.251 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:43.251 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:43.251 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:43.251 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.251 [2024-05-15 11:21:01.885403] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:27:43.251 [2024-05-15 11:21:01.885654] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.509 [2024-05-15 11:21:02.053294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.766 [2024-05-15 11:21:02.274285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.024 [2024-05-15 11:21:02.474281] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:44.024 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:44.024 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:27:44.024 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:44.282 [2024-05-15 11:21:02.873302] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:44.282 [2024-05-15 11:21:02.873374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:44.282 [2024-05-15 11:21:02.873391] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:44.282 [2024-05-15 11:21:02.873426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:44.282 [2024-05-15 11:21:02.873435] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:44.282 [2024-05-15 11:21:02.873480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.282 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.540 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:44.540 "name": "Existed_Raid", 00:27:44.540 "uuid": 
"2504fd83-4847-43a1-b5b0-d2884d299218", 00:27:44.540 "strip_size_kb": 64, 00:27:44.540 "state": "configuring", 00:27:44.540 "raid_level": "concat", 00:27:44.540 "superblock": true, 00:27:44.540 "num_base_bdevs": 3, 00:27:44.540 "num_base_bdevs_discovered": 0, 00:27:44.540 "num_base_bdevs_operational": 3, 00:27:44.540 "base_bdevs_list": [ 00:27:44.540 { 00:27:44.540 "name": "BaseBdev1", 00:27:44.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.540 "is_configured": false, 00:27:44.540 "data_offset": 0, 00:27:44.540 "data_size": 0 00:27:44.540 }, 00:27:44.540 { 00:27:44.540 "name": "BaseBdev2", 00:27:44.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.540 "is_configured": false, 00:27:44.540 "data_offset": 0, 00:27:44.540 "data_size": 0 00:27:44.540 }, 00:27:44.540 { 00:27:44.540 "name": "BaseBdev3", 00:27:44.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.540 "is_configured": false, 00:27:44.540 "data_offset": 0, 00:27:44.540 "data_size": 0 00:27:44.540 } 00:27:44.540 ] 00:27:44.540 }' 00:27:44.540 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:44.540 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.472 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:45.472 [2024-05-15 11:21:03.969299] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:45.472 [2024-05-15 11:21:03.969350] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:27:45.472 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:45.729 [2024-05-15 11:21:04.205374] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:45.729 [2024-05-15 11:21:04.205454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:45.729 [2024-05-15 11:21:04.205470] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:45.729 [2024-05-15 11:21:04.205500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:45.729 [2024-05-15 11:21:04.205510] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:45.729 [2024-05-15 11:21:04.205536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:45.729 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:45.987 [2024-05-15 11:21:04.438802] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:45.987 BaseBdev1 00:27:45.987 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:27:45.987 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:27:45.987 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:45.987 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 
00:27:45.987 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:45.987 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:45.987 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:46.245 [ 00:27:46.245 { 00:27:46.245 "name": "BaseBdev1", 00:27:46.245 "aliases": [ 00:27:46.245 "513fc9aa-95a1-4efd-b453-74e65b2951b3" 00:27:46.245 ], 00:27:46.245 "product_name": "Malloc disk", 00:27:46.245 "block_size": 512, 00:27:46.245 "num_blocks": 65536, 00:27:46.245 "uuid": "513fc9aa-95a1-4efd-b453-74e65b2951b3", 00:27:46.245 "assigned_rate_limits": { 00:27:46.245 "rw_ios_per_sec": 0, 00:27:46.245 "rw_mbytes_per_sec": 0, 00:27:46.245 "r_mbytes_per_sec": 0, 00:27:46.245 "w_mbytes_per_sec": 0 00:27:46.245 }, 00:27:46.245 "claimed": true, 00:27:46.245 "claim_type": "exclusive_write", 00:27:46.245 "zoned": false, 00:27:46.245 "supported_io_types": { 00:27:46.245 "read": true, 00:27:46.245 "write": true, 00:27:46.245 "unmap": true, 00:27:46.245 "write_zeroes": true, 00:27:46.245 "flush": true, 00:27:46.245 "reset": true, 00:27:46.245 "compare": false, 00:27:46.245 "compare_and_write": false, 00:27:46.245 "abort": true, 00:27:46.245 "nvme_admin": false, 00:27:46.245 "nvme_io": false 00:27:46.245 }, 00:27:46.245 "memory_domains": [ 00:27:46.245 { 00:27:46.245 "dma_device_id": "system", 00:27:46.245 "dma_device_type": 1 00:27:46.245 }, 00:27:46.245 { 00:27:46.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.245 "dma_device_type": 2 00:27:46.245 } 00:27:46.245 ], 00:27:46.245 "driver_specific": {} 00:27:46.245 } 00:27:46.245 ] 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:46.245 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.503 11:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:46.503 "name": "Existed_Raid", 00:27:46.503 "uuid": "0de15105-3ac0-4509-8ed2-864752054826", 00:27:46.503 "strip_size_kb": 64, 00:27:46.503 "state": "configuring", 00:27:46.503 "raid_level": "concat", 00:27:46.503 "superblock": true, 00:27:46.503 "num_base_bdevs": 3, 00:27:46.503 "num_base_bdevs_discovered": 1, 00:27:46.503 "num_base_bdevs_operational": 3, 00:27:46.503 "base_bdevs_list": [ 00:27:46.503 { 00:27:46.503 "name": "BaseBdev1", 00:27:46.503 "uuid": "513fc9aa-95a1-4efd-b453-74e65b2951b3", 00:27:46.503 "is_configured": true, 00:27:46.503 "data_offset": 2048, 00:27:46.503 "data_size": 63488 00:27:46.503 }, 00:27:46.503 { 00:27:46.503 "name": "BaseBdev2", 00:27:46.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.503 "is_configured": false, 00:27:46.503 "data_offset": 0, 00:27:46.503 "data_size": 0 00:27:46.503 }, 00:27:46.503 { 00:27:46.503 "name": "BaseBdev3", 00:27:46.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.503 "is_configured": false, 00:27:46.503 "data_offset": 0, 00:27:46.503 "data_size": 0 00:27:46.503 } 00:27:46.503 ] 00:27:46.503 }' 00:27:46.503 11:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:46.503 11:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:47.069 11:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:47.327 [2024-05-15 11:21:05.903007] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:47.327 [2024-05-15 11:21:05.903060] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:27:47.327 11:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:47.586 [2024-05-15 11:21:06.115137] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:47.587 [2024-05-15 11:21:06.116758] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:47.587 [2024-05-15 11:21:06.116832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:47.587 [2024-05-15 11:21:06.116848] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:47.587 [2024-05-15 11:21:06.116876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
raid_level=concat 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.587 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.846 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:47.846 "name": "Existed_Raid", 00:27:47.846 "uuid": "7b96d0f4-b9ae-49e8-985c-a6a6e15ac8fb", 00:27:47.846 "strip_size_kb": 64, 00:27:47.846 "state": "configuring", 00:27:47.846 "raid_level": "concat", 00:27:47.846 "superblock": true, 00:27:47.846 "num_base_bdevs": 3, 00:27:47.846 "num_base_bdevs_discovered": 1, 00:27:47.846 "num_base_bdevs_operational": 3, 00:27:47.846 "base_bdevs_list": [ 00:27:47.846 { 00:27:47.846 "name": "BaseBdev1", 00:27:47.846 "uuid": "513fc9aa-95a1-4efd-b453-74e65b2951b3", 00:27:47.846 "is_configured": true, 00:27:47.846 "data_offset": 2048, 00:27:47.846 "data_size": 63488 00:27:47.846 }, 00:27:47.846 { 00:27:47.846 "name": "BaseBdev2", 00:27:47.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.846 "is_configured": false, 00:27:47.846 "data_offset": 0, 00:27:47.846 "data_size": 0 00:27:47.846 }, 00:27:47.846 { 00:27:47.846 "name": "BaseBdev3", 00:27:47.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.846 "is_configured": false, 00:27:47.846 "data_offset": 0, 00:27:47.846 "data_size": 0 00:27:47.847 } 00:27:47.847 ] 00:27:47.847 }' 00:27:47.847 11:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:47.847 11:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:48.781 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:48.781 [2024-05-15 11:21:07.324755] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:48.781 BaseBdev2 00:27:48.781 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:27:48.781 11:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:27:48.781 11:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:48.781 11:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:27:48.782 11:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:48.782 11:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:48.782 11:21:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:49.040 11:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:49.300 [ 00:27:49.300 { 00:27:49.300 "name": "BaseBdev2", 00:27:49.300 "aliases": [ 00:27:49.300 "8afb889e-17e6-4956-aad5-c144b159f451" 00:27:49.300 ], 00:27:49.300 "product_name": "Malloc disk", 00:27:49.300 "block_size": 512, 00:27:49.300 "num_blocks": 65536, 00:27:49.300 "uuid": "8afb889e-17e6-4956-aad5-c144b159f451", 00:27:49.300 "assigned_rate_limits": { 00:27:49.300 "rw_ios_per_sec": 0, 00:27:49.300 "rw_mbytes_per_sec": 0, 00:27:49.300 "r_mbytes_per_sec": 0, 00:27:49.300 "w_mbytes_per_sec": 0 00:27:49.300 }, 00:27:49.300 "claimed": true, 00:27:49.300 "claim_type": "exclusive_write", 00:27:49.300 "zoned": false, 00:27:49.300 "supported_io_types": { 00:27:49.300 "read": true, 00:27:49.300 "write": true, 00:27:49.300 "unmap": true, 00:27:49.300 "write_zeroes": true, 00:27:49.300 "flush": true, 00:27:49.300 "reset": true, 00:27:49.300 "compare": false, 00:27:49.300 "compare_and_write": false, 00:27:49.300 "abort": true, 00:27:49.300 "nvme_admin": false, 00:27:49.300 "nvme_io": false 00:27:49.300 }, 00:27:49.300 "memory_domains": [ 00:27:49.300 { 00:27:49.300 "dma_device_id": "system", 00:27:49.300 "dma_device_type": 1 00:27:49.300 }, 00:27:49.300 { 00:27:49.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.300 "dma_device_type": 2 00:27:49.300 } 00:27:49.300 ], 00:27:49.300 "driver_specific": {} 00:27:49.300 } 00:27:49.300 ] 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:49.300 11:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.558 11:21:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:49.558 "name": "Existed_Raid", 00:27:49.558 "uuid": "7b96d0f4-b9ae-49e8-985c-a6a6e15ac8fb", 00:27:49.558 "strip_size_kb": 64, 00:27:49.558 "state": "configuring", 00:27:49.558 "raid_level": "concat", 00:27:49.558 "superblock": true, 00:27:49.558 "num_base_bdevs": 3, 00:27:49.558 "num_base_bdevs_discovered": 2, 00:27:49.558 "num_base_bdevs_operational": 3, 00:27:49.558 "base_bdevs_list": [ 00:27:49.558 { 00:27:49.558 "name": "BaseBdev1", 00:27:49.558 "uuid": "513fc9aa-95a1-4efd-b453-74e65b2951b3", 00:27:49.558 "is_configured": true, 00:27:49.558 "data_offset": 2048, 00:27:49.558 "data_size": 63488 00:27:49.558 }, 00:27:49.558 { 00:27:49.558 "name": "BaseBdev2", 00:27:49.558 "uuid": "8afb889e-17e6-4956-aad5-c144b159f451", 00:27:49.558 "is_configured": true, 00:27:49.558 "data_offset": 2048, 00:27:49.558 "data_size": 63488 00:27:49.558 }, 00:27:49.558 { 00:27:49.558 "name": "BaseBdev3", 00:27:49.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.558 "is_configured": false, 00:27:49.558 "data_offset": 0, 00:27:49.558 "data_size": 0 00:27:49.558 } 00:27:49.558 ] 00:27:49.558 }' 00:27:49.558 11:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:49.558 11:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.124 11:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:50.382 [2024-05-15 11:21:08.950453] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:50.382 [2024-05-15 11:21:08.950628] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:27:50.382 [2024-05-15 11:21:08.950645] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:50.382 [2024-05-15 11:21:08.950740] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:27:50.382 BaseBdev3 00:27:50.382 [2024-05-15 11:21:08.951316] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:27:50.382 [2024-05-15 11:21:08.951362] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:27:50.382 [2024-05-15 11:21:08.951717] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:50.382 11:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:27:50.382 11:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:27:50.382 11:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:50.382 11:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:27:50.382 11:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:50.382 11:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:50.382 11:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:50.641 11:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:50.898 [ 00:27:50.898 { 00:27:50.898 "name": "BaseBdev3", 00:27:50.898 "aliases": [ 00:27:50.898 "0d4b452f-b674-4a4e-821d-18959ab560ce" 00:27:50.898 ], 00:27:50.898 "product_name": "Malloc disk", 00:27:50.898 "block_size": 512, 00:27:50.898 "num_blocks": 65536, 00:27:50.898 "uuid": "0d4b452f-b674-4a4e-821d-18959ab560ce", 00:27:50.898 "assigned_rate_limits": { 00:27:50.898 "rw_ios_per_sec": 0, 00:27:50.898 "rw_mbytes_per_sec": 0, 00:27:50.898 "r_mbytes_per_sec": 0, 00:27:50.898 "w_mbytes_per_sec": 0 00:27:50.898 }, 00:27:50.898 "claimed": true, 00:27:50.898 "claim_type": "exclusive_write", 00:27:50.898 "zoned": false, 00:27:50.898 "supported_io_types": { 00:27:50.898 "read": true, 00:27:50.898 "write": true, 00:27:50.898 "unmap": true, 00:27:50.898 "write_zeroes": true, 00:27:50.898 "flush": true, 00:27:50.898 "reset": true, 00:27:50.898 "compare": false, 00:27:50.898 "compare_and_write": false, 00:27:50.898 "abort": true, 00:27:50.898 "nvme_admin": false, 00:27:50.898 "nvme_io": false 00:27:50.898 }, 00:27:50.898 "memory_domains": [ 00:27:50.898 { 00:27:50.898 "dma_device_id": "system", 00:27:50.898 "dma_device_type": 1 00:27:50.898 }, 00:27:50.898 { 00:27:50.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.898 "dma_device_type": 2 00:27:50.898 } 00:27:50.898 ], 00:27:50.898 "driver_specific": {} 00:27:50.898 } 00:27:50.898 ] 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.898 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.157 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:51.157 "name": "Existed_Raid", 00:27:51.157 "uuid": "7b96d0f4-b9ae-49e8-985c-a6a6e15ac8fb", 00:27:51.157 "strip_size_kb": 64, 00:27:51.157 "state": "online", 00:27:51.157 "raid_level": "concat", 
00:27:51.157 "superblock": true, 00:27:51.157 "num_base_bdevs": 3, 00:27:51.157 "num_base_bdevs_discovered": 3, 00:27:51.157 "num_base_bdevs_operational": 3, 00:27:51.157 "base_bdevs_list": [ 00:27:51.157 { 00:27:51.157 "name": "BaseBdev1", 00:27:51.157 "uuid": "513fc9aa-95a1-4efd-b453-74e65b2951b3", 00:27:51.157 "is_configured": true, 00:27:51.157 "data_offset": 2048, 00:27:51.157 "data_size": 63488 00:27:51.157 }, 00:27:51.157 { 00:27:51.157 "name": "BaseBdev2", 00:27:51.157 "uuid": "8afb889e-17e6-4956-aad5-c144b159f451", 00:27:51.157 "is_configured": true, 00:27:51.157 "data_offset": 2048, 00:27:51.157 "data_size": 63488 00:27:51.157 }, 00:27:51.157 { 00:27:51.157 "name": "BaseBdev3", 00:27:51.157 "uuid": "0d4b452f-b674-4a4e-821d-18959ab560ce", 00:27:51.157 "is_configured": true, 00:27:51.157 "data_offset": 2048, 00:27:51.157 "data_size": 63488 00:27:51.157 } 00:27:51.157 ] 00:27:51.157 }' 00:27:51.157 11:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:51.157 11:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:51.735 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:27:51.735 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:27:51.735 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:51.735 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:51.735 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:51.735 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:27:51.735 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:51.735 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:52.016 [2024-05-15 11:21:10.430997] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:52.016 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:52.016 "name": "Existed_Raid", 00:27:52.016 "aliases": [ 00:27:52.016 "7b96d0f4-b9ae-49e8-985c-a6a6e15ac8fb" 00:27:52.016 ], 00:27:52.016 "product_name": "Raid Volume", 00:27:52.016 "block_size": 512, 00:27:52.016 "num_blocks": 190464, 00:27:52.016 "uuid": "7b96d0f4-b9ae-49e8-985c-a6a6e15ac8fb", 00:27:52.016 "assigned_rate_limits": { 00:27:52.016 "rw_ios_per_sec": 0, 00:27:52.016 "rw_mbytes_per_sec": 0, 00:27:52.016 "r_mbytes_per_sec": 0, 00:27:52.016 "w_mbytes_per_sec": 0 00:27:52.016 }, 00:27:52.016 "claimed": false, 00:27:52.016 "zoned": false, 00:27:52.016 "supported_io_types": { 00:27:52.016 "read": true, 00:27:52.016 "write": true, 00:27:52.016 "unmap": true, 00:27:52.016 "write_zeroes": true, 00:27:52.016 "flush": true, 00:27:52.016 "reset": true, 00:27:52.016 "compare": false, 00:27:52.016 "compare_and_write": false, 00:27:52.016 "abort": false, 00:27:52.016 "nvme_admin": false, 00:27:52.016 "nvme_io": false 00:27:52.016 }, 00:27:52.016 "memory_domains": [ 00:27:52.016 { 00:27:52.016 "dma_device_id": "system", 00:27:52.016 "dma_device_type": 1 00:27:52.016 }, 00:27:52.016 { 00:27:52.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.016 "dma_device_type": 2 00:27:52.016 }, 00:27:52.016 { 
00:27:52.016 "dma_device_id": "system", 00:27:52.016 "dma_device_type": 1 00:27:52.016 }, 00:27:52.016 { 00:27:52.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.016 "dma_device_type": 2 00:27:52.016 }, 00:27:52.016 { 00:27:52.016 "dma_device_id": "system", 00:27:52.016 "dma_device_type": 1 00:27:52.016 }, 00:27:52.016 { 00:27:52.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.016 "dma_device_type": 2 00:27:52.016 } 00:27:52.016 ], 00:27:52.016 "driver_specific": { 00:27:52.016 "raid": { 00:27:52.016 "uuid": "7b96d0f4-b9ae-49e8-985c-a6a6e15ac8fb", 00:27:52.016 "strip_size_kb": 64, 00:27:52.016 "state": "online", 00:27:52.016 "raid_level": "concat", 00:27:52.016 "superblock": true, 00:27:52.016 "num_base_bdevs": 3, 00:27:52.016 "num_base_bdevs_discovered": 3, 00:27:52.016 "num_base_bdevs_operational": 3, 00:27:52.016 "base_bdevs_list": [ 00:27:52.016 { 00:27:52.016 "name": "BaseBdev1", 00:27:52.016 "uuid": "513fc9aa-95a1-4efd-b453-74e65b2951b3", 00:27:52.016 "is_configured": true, 00:27:52.016 "data_offset": 2048, 00:27:52.016 "data_size": 63488 00:27:52.016 }, 00:27:52.016 { 00:27:52.016 "name": "BaseBdev2", 00:27:52.016 "uuid": "8afb889e-17e6-4956-aad5-c144b159f451", 00:27:52.016 "is_configured": true, 00:27:52.016 "data_offset": 2048, 00:27:52.016 "data_size": 63488 00:27:52.016 }, 00:27:52.016 { 00:27:52.016 "name": "BaseBdev3", 00:27:52.016 "uuid": "0d4b452f-b674-4a4e-821d-18959ab560ce", 00:27:52.016 "is_configured": true, 00:27:52.016 "data_offset": 2048, 00:27:52.016 "data_size": 63488 00:27:52.016 } 00:27:52.016 ] 00:27:52.016 } 00:27:52.016 } 00:27:52.016 }' 00:27:52.016 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:52.016 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:27:52.016 BaseBdev2 00:27:52.016 BaseBdev3' 00:27:52.016 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:52.016 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:52.016 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:52.274 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:52.274 "name": "BaseBdev1", 00:27:52.274 "aliases": [ 00:27:52.274 "513fc9aa-95a1-4efd-b453-74e65b2951b3" 00:27:52.274 ], 00:27:52.274 "product_name": "Malloc disk", 00:27:52.274 "block_size": 512, 00:27:52.274 "num_blocks": 65536, 00:27:52.274 "uuid": "513fc9aa-95a1-4efd-b453-74e65b2951b3", 00:27:52.274 "assigned_rate_limits": { 00:27:52.274 "rw_ios_per_sec": 0, 00:27:52.274 "rw_mbytes_per_sec": 0, 00:27:52.274 "r_mbytes_per_sec": 0, 00:27:52.274 "w_mbytes_per_sec": 0 00:27:52.274 }, 00:27:52.274 "claimed": true, 00:27:52.274 "claim_type": "exclusive_write", 00:27:52.274 "zoned": false, 00:27:52.274 "supported_io_types": { 00:27:52.274 "read": true, 00:27:52.274 "write": true, 00:27:52.274 "unmap": true, 00:27:52.274 "write_zeroes": true, 00:27:52.274 "flush": true, 00:27:52.274 "reset": true, 00:27:52.274 "compare": false, 00:27:52.274 "compare_and_write": false, 00:27:52.274 "abort": true, 00:27:52.274 "nvme_admin": false, 00:27:52.274 "nvme_io": false 00:27:52.274 }, 00:27:52.274 "memory_domains": [ 00:27:52.274 { 00:27:52.274 "dma_device_id": "system", 00:27:52.274 
"dma_device_type": 1 00:27:52.274 }, 00:27:52.274 { 00:27:52.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.274 "dma_device_type": 2 00:27:52.274 } 00:27:52.274 ], 00:27:52.274 "driver_specific": {} 00:27:52.274 }' 00:27:52.274 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:52.274 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:52.274 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:52.274 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:52.533 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:52.533 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:52.533 11:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:52.533 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:52.533 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:52.533 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:52.533 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:52.792 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:52.792 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:52.792 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:52.792 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:52.792 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:52.792 "name": "BaseBdev2", 00:27:52.792 "aliases": [ 00:27:52.792 "8afb889e-17e6-4956-aad5-c144b159f451" 00:27:52.792 ], 00:27:52.792 "product_name": "Malloc disk", 00:27:52.792 "block_size": 512, 00:27:52.792 "num_blocks": 65536, 00:27:52.792 "uuid": "8afb889e-17e6-4956-aad5-c144b159f451", 00:27:52.792 "assigned_rate_limits": { 00:27:52.792 "rw_ios_per_sec": 0, 00:27:52.792 "rw_mbytes_per_sec": 0, 00:27:52.792 "r_mbytes_per_sec": 0, 00:27:52.792 "w_mbytes_per_sec": 0 00:27:52.792 }, 00:27:52.792 "claimed": true, 00:27:52.792 "claim_type": "exclusive_write", 00:27:52.792 "zoned": false, 00:27:52.792 "supported_io_types": { 00:27:52.792 "read": true, 00:27:52.792 "write": true, 00:27:52.792 "unmap": true, 00:27:52.792 "write_zeroes": true, 00:27:52.792 "flush": true, 00:27:52.792 "reset": true, 00:27:52.792 "compare": false, 00:27:52.792 "compare_and_write": false, 00:27:52.792 "abort": true, 00:27:52.792 "nvme_admin": false, 00:27:52.792 "nvme_io": false 00:27:52.792 }, 00:27:52.792 "memory_domains": [ 00:27:52.792 { 00:27:52.792 "dma_device_id": "system", 00:27:52.792 "dma_device_type": 1 00:27:52.792 }, 00:27:52.792 { 00:27:52.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.792 "dma_device_type": 2 00:27:52.792 } 00:27:52.792 ], 00:27:52.792 "driver_specific": {} 00:27:52.792 }' 00:27:52.792 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:53.052 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:53.052 
11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:53.052 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:53.052 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:53.052 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:53.052 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:53.052 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:53.311 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:53.311 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:53.311 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:53.311 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:53.311 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:53.311 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:53.311 11:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:53.570 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:53.570 "name": "BaseBdev3", 00:27:53.570 "aliases": [ 00:27:53.570 "0d4b452f-b674-4a4e-821d-18959ab560ce" 00:27:53.570 ], 00:27:53.570 "product_name": "Malloc disk", 00:27:53.570 "block_size": 512, 00:27:53.570 "num_blocks": 65536, 00:27:53.570 "uuid": "0d4b452f-b674-4a4e-821d-18959ab560ce", 00:27:53.570 "assigned_rate_limits": { 00:27:53.570 "rw_ios_per_sec": 0, 00:27:53.570 "rw_mbytes_per_sec": 0, 00:27:53.570 "r_mbytes_per_sec": 0, 00:27:53.570 "w_mbytes_per_sec": 0 00:27:53.570 }, 00:27:53.570 "claimed": true, 00:27:53.570 "claim_type": "exclusive_write", 00:27:53.570 "zoned": false, 00:27:53.570 "supported_io_types": { 00:27:53.570 "read": true, 00:27:53.570 "write": true, 00:27:53.570 "unmap": true, 00:27:53.570 "write_zeroes": true, 00:27:53.570 "flush": true, 00:27:53.570 "reset": true, 00:27:53.570 "compare": false, 00:27:53.570 "compare_and_write": false, 00:27:53.570 "abort": true, 00:27:53.570 "nvme_admin": false, 00:27:53.570 "nvme_io": false 00:27:53.570 }, 00:27:53.570 "memory_domains": [ 00:27:53.570 { 00:27:53.570 "dma_device_id": "system", 00:27:53.570 "dma_device_type": 1 00:27:53.570 }, 00:27:53.570 { 00:27:53.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.570 "dma_device_type": 2 00:27:53.570 } 00:27:53.570 ], 00:27:53.570 "driver_specific": {} 00:27:53.570 }' 00:27:53.570 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:53.570 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:53.830 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:27:53.830 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:53.830 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:53.830 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:53.830 11:21:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:53.830 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:53.830 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:53.830 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:54.089 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:54.089 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:54.089 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:54.089 [2024-05-15 11:21:12.715300] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:54.089 [2024-05-15 11:21:12.715349] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:54.089 [2024-05-15 11:21:12.715399] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.347 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.606 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:54.606 "name": "Existed_Raid", 00:27:54.606 "uuid": "7b96d0f4-b9ae-49e8-985c-a6a6e15ac8fb", 00:27:54.606 "strip_size_kb": 64, 00:27:54.606 "state": "offline", 00:27:54.606 "raid_level": "concat", 00:27:54.606 "superblock": true, 
00:27:54.606 "num_base_bdevs": 3, 00:27:54.606 "num_base_bdevs_discovered": 2, 00:27:54.606 "num_base_bdevs_operational": 2, 00:27:54.606 "base_bdevs_list": [ 00:27:54.606 { 00:27:54.606 "name": null, 00:27:54.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.606 "is_configured": false, 00:27:54.606 "data_offset": 2048, 00:27:54.606 "data_size": 63488 00:27:54.606 }, 00:27:54.606 { 00:27:54.606 "name": "BaseBdev2", 00:27:54.606 "uuid": "8afb889e-17e6-4956-aad5-c144b159f451", 00:27:54.606 "is_configured": true, 00:27:54.606 "data_offset": 2048, 00:27:54.606 "data_size": 63488 00:27:54.606 }, 00:27:54.606 { 00:27:54.606 "name": "BaseBdev3", 00:27:54.606 "uuid": "0d4b452f-b674-4a4e-821d-18959ab560ce", 00:27:54.606 "is_configured": true, 00:27:54.606 "data_offset": 2048, 00:27:54.606 "data_size": 63488 00:27:54.606 } 00:27:54.606 ] 00:27:54.606 }' 00:27:54.606 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:54.606 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:55.174 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:55.174 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:55.174 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.174 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:27:55.436 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:27:55.436 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:55.436 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:55.694 [2024-05-15 11:21:14.131679] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:55.694 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:55.694 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:55.694 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:27:55.694 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.953 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:27:55.953 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:55.953 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:56.212 [2024-05-15 11:21:14.658313] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:56.212 [2024-05-15 11:21:14.658394] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:27:56.212 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:56.212 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:27:56.212 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.212 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:27:56.471 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:27:56.471 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:27:56.471 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:27:56.471 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:27:56.471 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:27:56.471 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:56.730 BaseBdev2 00:27:56.730 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:27:56.730 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:27:56.730 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:56.730 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:27:56.730 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:56.730 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:56.730 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:56.988 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:57.246 [ 00:27:57.246 { 00:27:57.246 "name": "BaseBdev2", 00:27:57.246 "aliases": [ 00:27:57.246 "dc1d9fea-e4c8-46d2-b862-282f9d831360" 00:27:57.246 ], 00:27:57.246 "product_name": "Malloc disk", 00:27:57.246 "block_size": 512, 00:27:57.246 "num_blocks": 65536, 00:27:57.246 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:27:57.246 "assigned_rate_limits": { 00:27:57.246 "rw_ios_per_sec": 0, 00:27:57.246 "rw_mbytes_per_sec": 0, 00:27:57.246 "r_mbytes_per_sec": 0, 00:27:57.246 "w_mbytes_per_sec": 0 00:27:57.246 }, 00:27:57.246 "claimed": false, 00:27:57.246 "zoned": false, 00:27:57.246 "supported_io_types": { 00:27:57.246 "read": true, 00:27:57.246 "write": true, 00:27:57.246 "unmap": true, 00:27:57.246 "write_zeroes": true, 00:27:57.246 "flush": true, 00:27:57.246 "reset": true, 00:27:57.246 "compare": false, 00:27:57.246 "compare_and_write": false, 00:27:57.247 "abort": true, 00:27:57.247 "nvme_admin": false, 00:27:57.247 "nvme_io": false 00:27:57.247 }, 00:27:57.247 "memory_domains": [ 00:27:57.247 { 00:27:57.247 "dma_device_id": "system", 00:27:57.247 "dma_device_type": 1 00:27:57.247 }, 00:27:57.247 { 00:27:57.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.247 "dma_device_type": 2 00:27:57.247 } 00:27:57.247 ], 00:27:57.247 "driver_specific": {} 00:27:57.247 } 00:27:57.247 ] 00:27:57.247 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
return 0 00:27:57.247 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:27:57.247 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:27:57.247 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:57.247 BaseBdev3 00:27:57.505 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:27:57.505 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:27:57.505 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:57.505 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:27:57.505 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:57.505 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:57.505 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:57.505 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:57.764 [ 00:27:57.764 { 00:27:57.764 "name": "BaseBdev3", 00:27:57.764 "aliases": [ 00:27:57.764 "2e743918-cf9d-4478-97d0-f5c2b9394a1f" 00:27:57.764 ], 00:27:57.764 "product_name": "Malloc disk", 00:27:57.764 "block_size": 512, 00:27:57.764 "num_blocks": 65536, 00:27:57.764 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:27:57.764 "assigned_rate_limits": { 00:27:57.764 "rw_ios_per_sec": 0, 00:27:57.764 "rw_mbytes_per_sec": 0, 00:27:57.764 "r_mbytes_per_sec": 0, 00:27:57.764 "w_mbytes_per_sec": 0 00:27:57.764 }, 00:27:57.764 "claimed": false, 00:27:57.764 "zoned": false, 00:27:57.764 "supported_io_types": { 00:27:57.764 "read": true, 00:27:57.764 "write": true, 00:27:57.764 "unmap": true, 00:27:57.764 "write_zeroes": true, 00:27:57.764 "flush": true, 00:27:57.764 "reset": true, 00:27:57.764 "compare": false, 00:27:57.764 "compare_and_write": false, 00:27:57.764 "abort": true, 00:27:57.764 "nvme_admin": false, 00:27:57.764 "nvme_io": false 00:27:57.764 }, 00:27:57.764 "memory_domains": [ 00:27:57.764 { 00:27:57.764 "dma_device_id": "system", 00:27:57.764 "dma_device_type": 1 00:27:57.764 }, 00:27:57.764 { 00:27:57.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.764 "dma_device_type": 2 00:27:57.764 } 00:27:57.764 ], 00:27:57.764 "driver_specific": {} 00:27:57.764 } 00:27:57.764 ] 00:27:57.764 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:27:57.764 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:27:57.764 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:27:57.764 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:58.022 [2024-05-15 11:21:16.583721] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:27:58.022 [2024-05-15 11:21:16.583847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:58.022 [2024-05-15 11:21:16.583880] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:58.022 [2024-05-15 11:21:16.585493] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.022 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:58.279 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:58.279 "name": "Existed_Raid", 00:27:58.279 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:27:58.279 "strip_size_kb": 64, 00:27:58.279 "state": "configuring", 00:27:58.279 "raid_level": "concat", 00:27:58.279 "superblock": true, 00:27:58.279 "num_base_bdevs": 3, 00:27:58.279 "num_base_bdevs_discovered": 2, 00:27:58.279 "num_base_bdevs_operational": 3, 00:27:58.279 "base_bdevs_list": [ 00:27:58.279 { 00:27:58.279 "name": "BaseBdev1", 00:27:58.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.279 "is_configured": false, 00:27:58.279 "data_offset": 0, 00:27:58.279 "data_size": 0 00:27:58.279 }, 00:27:58.279 { 00:27:58.279 "name": "BaseBdev2", 00:27:58.279 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:27:58.279 "is_configured": true, 00:27:58.279 "data_offset": 2048, 00:27:58.279 "data_size": 63488 00:27:58.279 }, 00:27:58.279 { 00:27:58.279 "name": "BaseBdev3", 00:27:58.279 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:27:58.279 "is_configured": true, 00:27:58.279 "data_offset": 2048, 00:27:58.279 "data_size": 63488 00:27:58.279 } 00:27:58.279 ] 00:27:58.279 }' 00:27:58.279 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:58.279 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:59.214 [2024-05-15 
11:21:17.735788] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.214 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:59.473 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:59.473 "name": "Existed_Raid", 00:27:59.473 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:27:59.473 "strip_size_kb": 64, 00:27:59.473 "state": "configuring", 00:27:59.473 "raid_level": "concat", 00:27:59.473 "superblock": true, 00:27:59.473 "num_base_bdevs": 3, 00:27:59.473 "num_base_bdevs_discovered": 1, 00:27:59.473 "num_base_bdevs_operational": 3, 00:27:59.473 "base_bdevs_list": [ 00:27:59.473 { 00:27:59.473 "name": "BaseBdev1", 00:27:59.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.473 "is_configured": false, 00:27:59.473 "data_offset": 0, 00:27:59.473 "data_size": 0 00:27:59.473 }, 00:27:59.473 { 00:27:59.473 "name": null, 00:27:59.473 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:27:59.473 "is_configured": false, 00:27:59.473 "data_offset": 2048, 00:27:59.473 "data_size": 63488 00:27:59.473 }, 00:27:59.473 { 00:27:59.473 "name": "BaseBdev3", 00:27:59.473 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:27:59.473 "is_configured": true, 00:27:59.473 "data_offset": 2048, 00:27:59.473 "data_size": 63488 00:27:59.473 } 00:27:59.473 ] 00:27:59.473 }' 00:27:59.473 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:59.473 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.411 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.411 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:00.411 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:28:00.411 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:00.670 [2024-05-15 11:21:19.195479] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:00.670 BaseBdev1 00:28:00.670 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:28:00.670 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:28:00.670 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:00.670 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:28:00.670 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:00.670 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:00.670 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:00.928 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:01.187 [ 00:28:01.187 { 00:28:01.187 "name": "BaseBdev1", 00:28:01.187 "aliases": [ 00:28:01.187 "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499" 00:28:01.187 ], 00:28:01.187 "product_name": "Malloc disk", 00:28:01.187 "block_size": 512, 00:28:01.187 "num_blocks": 65536, 00:28:01.187 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:01.187 "assigned_rate_limits": { 00:28:01.187 "rw_ios_per_sec": 0, 00:28:01.187 "rw_mbytes_per_sec": 0, 00:28:01.187 "r_mbytes_per_sec": 0, 00:28:01.187 "w_mbytes_per_sec": 0 00:28:01.187 }, 00:28:01.187 "claimed": true, 00:28:01.187 "claim_type": "exclusive_write", 00:28:01.187 "zoned": false, 00:28:01.187 "supported_io_types": { 00:28:01.187 "read": true, 00:28:01.187 "write": true, 00:28:01.187 "unmap": true, 00:28:01.187 "write_zeroes": true, 00:28:01.187 "flush": true, 00:28:01.187 "reset": true, 00:28:01.187 "compare": false, 00:28:01.187 "compare_and_write": false, 00:28:01.187 "abort": true, 00:28:01.187 "nvme_admin": false, 00:28:01.187 "nvme_io": false 00:28:01.187 }, 00:28:01.187 "memory_domains": [ 00:28:01.187 { 00:28:01.187 "dma_device_id": "system", 00:28:01.187 "dma_device_type": 1 00:28:01.187 }, 00:28:01.187 { 00:28:01.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:01.187 "dma_device_type": 2 00:28:01.187 } 00:28:01.187 ], 00:28:01.187 "driver_specific": {} 00:28:01.187 } 00:28:01.187 ] 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.187 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:01.446 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:01.446 "name": "Existed_Raid", 00:28:01.446 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:28:01.446 "strip_size_kb": 64, 00:28:01.446 "state": "configuring", 00:28:01.446 "raid_level": "concat", 00:28:01.446 "superblock": true, 00:28:01.446 "num_base_bdevs": 3, 00:28:01.446 "num_base_bdevs_discovered": 2, 00:28:01.446 "num_base_bdevs_operational": 3, 00:28:01.446 "base_bdevs_list": [ 00:28:01.446 { 00:28:01.446 "name": "BaseBdev1", 00:28:01.446 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:01.446 "is_configured": true, 00:28:01.446 "data_offset": 2048, 00:28:01.446 "data_size": 63488 00:28:01.446 }, 00:28:01.446 { 00:28:01.446 "name": null, 00:28:01.446 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:28:01.446 "is_configured": false, 00:28:01.446 "data_offset": 2048, 00:28:01.446 "data_size": 63488 00:28:01.446 }, 00:28:01.446 { 00:28:01.446 "name": "BaseBdev3", 00:28:01.446 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:28:01.446 "is_configured": true, 00:28:01.446 "data_offset": 2048, 00:28:01.446 "data_size": 63488 00:28:01.446 } 00:28:01.446 ] 00:28:01.446 }' 00:28:01.446 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:01.446 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.013 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.013 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:02.271 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:02.271 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:02.529 [2024-05-15 11:21:21.035774] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.529 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.786 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:02.786 "name": "Existed_Raid", 00:28:02.786 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:28:02.786 "strip_size_kb": 64, 00:28:02.786 "state": "configuring", 00:28:02.786 "raid_level": "concat", 00:28:02.786 "superblock": true, 00:28:02.786 "num_base_bdevs": 3, 00:28:02.786 "num_base_bdevs_discovered": 1, 00:28:02.786 "num_base_bdevs_operational": 3, 00:28:02.787 "base_bdevs_list": [ 00:28:02.787 { 00:28:02.787 "name": "BaseBdev1", 00:28:02.787 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:02.787 "is_configured": true, 00:28:02.787 "data_offset": 2048, 00:28:02.787 "data_size": 63488 00:28:02.787 }, 00:28:02.787 { 00:28:02.787 "name": null, 00:28:02.787 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:28:02.787 "is_configured": false, 00:28:02.787 "data_offset": 2048, 00:28:02.787 "data_size": 63488 00:28:02.787 }, 00:28:02.787 { 00:28:02.787 "name": null, 00:28:02.787 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:28:02.787 "is_configured": false, 00:28:02.787 "data_offset": 2048, 00:28:02.787 "data_size": 63488 00:28:02.787 } 00:28:02.787 ] 00:28:02.787 }' 00:28:02.787 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:02.787 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.723 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.723 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:03.723 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:28:03.723 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:03.981 [2024-05-15 11:21:22.455997] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.981 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:04.239 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:04.239 "name": "Existed_Raid", 00:28:04.239 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:28:04.239 "strip_size_kb": 64, 00:28:04.239 "state": "configuring", 00:28:04.239 "raid_level": "concat", 00:28:04.239 "superblock": true, 00:28:04.239 "num_base_bdevs": 3, 00:28:04.239 "num_base_bdevs_discovered": 2, 00:28:04.239 "num_base_bdevs_operational": 3, 00:28:04.239 "base_bdevs_list": [ 00:28:04.239 { 00:28:04.239 "name": "BaseBdev1", 00:28:04.240 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:04.240 "is_configured": true, 00:28:04.240 "data_offset": 2048, 00:28:04.240 "data_size": 63488 00:28:04.240 }, 00:28:04.240 { 00:28:04.240 "name": null, 00:28:04.240 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:28:04.240 "is_configured": false, 00:28:04.240 "data_offset": 2048, 00:28:04.240 "data_size": 63488 00:28:04.240 }, 00:28:04.240 { 00:28:04.240 "name": "BaseBdev3", 00:28:04.240 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:28:04.240 "is_configured": true, 00:28:04.240 "data_offset": 2048, 00:28:04.240 "data_size": 63488 00:28:04.240 } 00:28:04.240 ] 00:28:04.240 }' 00:28:04.240 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:04.240 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.806 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.807 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:05.065 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:28:05.065 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:05.324 [2024-05-15 11:21:23.836154] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.324 11:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.583 11:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:05.583 "name": "Existed_Raid", 00:28:05.583 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:28:05.583 "strip_size_kb": 64, 00:28:05.583 "state": "configuring", 00:28:05.583 "raid_level": "concat", 00:28:05.583 "superblock": true, 00:28:05.583 "num_base_bdevs": 3, 00:28:05.583 "num_base_bdevs_discovered": 1, 00:28:05.583 "num_base_bdevs_operational": 3, 00:28:05.583 "base_bdevs_list": [ 00:28:05.583 { 00:28:05.583 "name": null, 00:28:05.583 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:05.583 "is_configured": false, 00:28:05.583 "data_offset": 2048, 00:28:05.583 "data_size": 63488 00:28:05.583 }, 00:28:05.583 { 00:28:05.583 "name": null, 00:28:05.583 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:28:05.583 "is_configured": false, 00:28:05.583 "data_offset": 2048, 00:28:05.583 "data_size": 63488 00:28:05.583 }, 00:28:05.583 { 00:28:05.583 "name": "BaseBdev3", 00:28:05.583 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:28:05.583 "is_configured": true, 00:28:05.583 "data_offset": 2048, 00:28:05.583 "data_size": 63488 00:28:05.583 } 00:28:05.583 ] 00:28:05.583 }' 00:28:05.583 11:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:05.583 11:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.518 11:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.518 11:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:06.518 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:28:06.518 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:06.776 [2024-05-15 11:21:25.216352] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:06.776 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:06.776 11:21:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:06.776 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:06.776 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:06.777 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:06.777 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:06.777 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:06.777 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:06.777 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:06.777 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:06.777 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.777 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.036 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:07.036 "name": "Existed_Raid", 00:28:07.036 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:28:07.036 "strip_size_kb": 64, 00:28:07.036 "state": "configuring", 00:28:07.036 "raid_level": "concat", 00:28:07.036 "superblock": true, 00:28:07.036 "num_base_bdevs": 3, 00:28:07.036 "num_base_bdevs_discovered": 2, 00:28:07.036 "num_base_bdevs_operational": 3, 00:28:07.036 "base_bdevs_list": [ 00:28:07.036 { 00:28:07.036 "name": null, 00:28:07.036 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:07.036 "is_configured": false, 00:28:07.036 "data_offset": 2048, 00:28:07.036 "data_size": 63488 00:28:07.036 }, 00:28:07.036 { 00:28:07.036 "name": "BaseBdev2", 00:28:07.036 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:28:07.036 "is_configured": true, 00:28:07.036 "data_offset": 2048, 00:28:07.036 "data_size": 63488 00:28:07.036 }, 00:28:07.036 { 00:28:07.036 "name": "BaseBdev3", 00:28:07.036 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:28:07.036 "is_configured": true, 00:28:07.036 "data_offset": 2048, 00:28:07.036 "data_size": 63488 00:28:07.036 } 00:28:07.036 ] 00:28:07.036 }' 00:28:07.036 11:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:07.036 11:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.603 11:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.603 11:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:07.861 11:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:28:07.861 11:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.861 11:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:08.120 11:21:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499 00:28:08.378 [2024-05-15 11:21:26.772083] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:08.378 [2024-05-15 11:21:26.772271] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:28:08.378 [2024-05-15 11:21:26.772287] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:08.378 [2024-05-15 11:21:26.772367] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:28:08.378 [2024-05-15 11:21:26.772616] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:28:08.378 [2024-05-15 11:21:26.772643] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:28:08.378 NewBaseBdev 00:28:08.378 [2024-05-15 11:21:26.772761] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:08.378 11:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:28:08.378 11:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:28:08.378 11:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:08.378 11:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:28:08.378 11:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:08.378 11:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:08.378 11:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:08.378 11:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:08.637 [ 00:28:08.637 { 00:28:08.637 "name": "NewBaseBdev", 00:28:08.637 "aliases": [ 00:28:08.637 "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499" 00:28:08.637 ], 00:28:08.637 "product_name": "Malloc disk", 00:28:08.637 "block_size": 512, 00:28:08.637 "num_blocks": 65536, 00:28:08.637 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:08.637 "assigned_rate_limits": { 00:28:08.637 "rw_ios_per_sec": 0, 00:28:08.637 "rw_mbytes_per_sec": 0, 00:28:08.637 "r_mbytes_per_sec": 0, 00:28:08.637 "w_mbytes_per_sec": 0 00:28:08.637 }, 00:28:08.637 "claimed": true, 00:28:08.637 "claim_type": "exclusive_write", 00:28:08.637 "zoned": false, 00:28:08.637 "supported_io_types": { 00:28:08.637 "read": true, 00:28:08.637 "write": true, 00:28:08.637 "unmap": true, 00:28:08.637 "write_zeroes": true, 00:28:08.637 "flush": true, 00:28:08.637 "reset": true, 00:28:08.637 "compare": false, 00:28:08.637 "compare_and_write": false, 00:28:08.637 "abort": true, 00:28:08.637 "nvme_admin": false, 00:28:08.637 "nvme_io": false 00:28:08.637 }, 00:28:08.637 "memory_domains": [ 00:28:08.637 { 00:28:08.637 "dma_device_id": "system", 00:28:08.637 "dma_device_type": 1 00:28:08.637 }, 00:28:08.637 { 00:28:08.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:08.637 "dma_device_type": 2 00:28:08.637 } 00:28:08.637 ], 00:28:08.637 
"driver_specific": {} 00:28:08.637 } 00:28:08.637 ] 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.637 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.896 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:08.896 "name": "Existed_Raid", 00:28:08.896 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:28:08.896 "strip_size_kb": 64, 00:28:08.896 "state": "online", 00:28:08.896 "raid_level": "concat", 00:28:08.896 "superblock": true, 00:28:08.896 "num_base_bdevs": 3, 00:28:08.896 "num_base_bdevs_discovered": 3, 00:28:08.896 "num_base_bdevs_operational": 3, 00:28:08.896 "base_bdevs_list": [ 00:28:08.896 { 00:28:08.896 "name": "NewBaseBdev", 00:28:08.896 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:08.896 "is_configured": true, 00:28:08.896 "data_offset": 2048, 00:28:08.896 "data_size": 63488 00:28:08.896 }, 00:28:08.896 { 00:28:08.896 "name": "BaseBdev2", 00:28:08.896 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:28:08.896 "is_configured": true, 00:28:08.896 "data_offset": 2048, 00:28:08.896 "data_size": 63488 00:28:08.896 }, 00:28:08.896 { 00:28:08.896 "name": "BaseBdev3", 00:28:08.896 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:28:08.896 "is_configured": true, 00:28:08.896 "data_offset": 2048, 00:28:08.896 "data_size": 63488 00:28:08.896 } 00:28:08.896 ] 00:28:08.896 }' 00:28:08.896 11:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:08.896 11:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.467 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:28:09.468 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:28:09.468 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:28:09.468 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_info 00:28:09.468 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:28:09.468 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:28:09.468 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:28:09.468 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:09.726 [2024-05-15 11:21:28.276514] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.726 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:28:09.726 "name": "Existed_Raid", 00:28:09.726 "aliases": [ 00:28:09.726 "41957cf1-4c17-4f1e-a922-152a7705597d" 00:28:09.726 ], 00:28:09.726 "product_name": "Raid Volume", 00:28:09.726 "block_size": 512, 00:28:09.726 "num_blocks": 190464, 00:28:09.726 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:28:09.726 "assigned_rate_limits": { 00:28:09.726 "rw_ios_per_sec": 0, 00:28:09.726 "rw_mbytes_per_sec": 0, 00:28:09.726 "r_mbytes_per_sec": 0, 00:28:09.726 "w_mbytes_per_sec": 0 00:28:09.726 }, 00:28:09.726 "claimed": false, 00:28:09.726 "zoned": false, 00:28:09.726 "supported_io_types": { 00:28:09.726 "read": true, 00:28:09.726 "write": true, 00:28:09.726 "unmap": true, 00:28:09.726 "write_zeroes": true, 00:28:09.726 "flush": true, 00:28:09.726 "reset": true, 00:28:09.726 "compare": false, 00:28:09.726 "compare_and_write": false, 00:28:09.726 "abort": false, 00:28:09.726 "nvme_admin": false, 00:28:09.726 "nvme_io": false 00:28:09.726 }, 00:28:09.726 "memory_domains": [ 00:28:09.726 { 00:28:09.726 "dma_device_id": "system", 00:28:09.726 "dma_device_type": 1 00:28:09.726 }, 00:28:09.726 { 00:28:09.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.726 "dma_device_type": 2 00:28:09.726 }, 00:28:09.726 { 00:28:09.726 "dma_device_id": "system", 00:28:09.726 "dma_device_type": 1 00:28:09.726 }, 00:28:09.726 { 00:28:09.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.726 "dma_device_type": 2 00:28:09.726 }, 00:28:09.726 { 00:28:09.726 "dma_device_id": "system", 00:28:09.726 "dma_device_type": 1 00:28:09.726 }, 00:28:09.726 { 00:28:09.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.726 "dma_device_type": 2 00:28:09.726 } 00:28:09.726 ], 00:28:09.726 "driver_specific": { 00:28:09.726 "raid": { 00:28:09.726 "uuid": "41957cf1-4c17-4f1e-a922-152a7705597d", 00:28:09.726 "strip_size_kb": 64, 00:28:09.726 "state": "online", 00:28:09.726 "raid_level": "concat", 00:28:09.726 "superblock": true, 00:28:09.726 "num_base_bdevs": 3, 00:28:09.726 "num_base_bdevs_discovered": 3, 00:28:09.726 "num_base_bdevs_operational": 3, 00:28:09.726 "base_bdevs_list": [ 00:28:09.726 { 00:28:09.726 "name": "NewBaseBdev", 00:28:09.726 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:09.726 "is_configured": true, 00:28:09.726 "data_offset": 2048, 00:28:09.726 "data_size": 63488 00:28:09.726 }, 00:28:09.726 { 00:28:09.726 "name": "BaseBdev2", 00:28:09.726 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:28:09.726 "is_configured": true, 00:28:09.726 "data_offset": 2048, 00:28:09.726 "data_size": 63488 00:28:09.726 }, 00:28:09.726 { 00:28:09.726 "name": "BaseBdev3", 00:28:09.726 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:28:09.726 "is_configured": true, 00:28:09.726 "data_offset": 2048, 00:28:09.726 "data_size": 63488 00:28:09.726 } 00:28:09.726 ] 00:28:09.726 } 
00:28:09.726 } 00:28:09.726 }' 00:28:09.726 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:09.726 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:28:09.726 BaseBdev2 00:28:09.726 BaseBdev3' 00:28:09.726 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:09.726 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:09.727 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:09.984 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:09.984 "name": "NewBaseBdev", 00:28:09.984 "aliases": [ 00:28:09.984 "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499" 00:28:09.984 ], 00:28:09.984 "product_name": "Malloc disk", 00:28:09.984 "block_size": 512, 00:28:09.984 "num_blocks": 65536, 00:28:09.984 "uuid": "5b5ec03c-b789-4fab-ab8c-5d5bb9dd2499", 00:28:09.984 "assigned_rate_limits": { 00:28:09.984 "rw_ios_per_sec": 0, 00:28:09.984 "rw_mbytes_per_sec": 0, 00:28:09.984 "r_mbytes_per_sec": 0, 00:28:09.984 "w_mbytes_per_sec": 0 00:28:09.984 }, 00:28:09.984 "claimed": true, 00:28:09.984 "claim_type": "exclusive_write", 00:28:09.984 "zoned": false, 00:28:09.984 "supported_io_types": { 00:28:09.984 "read": true, 00:28:09.984 "write": true, 00:28:09.984 "unmap": true, 00:28:09.985 "write_zeroes": true, 00:28:09.985 "flush": true, 00:28:09.985 "reset": true, 00:28:09.985 "compare": false, 00:28:09.985 "compare_and_write": false, 00:28:09.985 "abort": true, 00:28:09.985 "nvme_admin": false, 00:28:09.985 "nvme_io": false 00:28:09.985 }, 00:28:09.985 "memory_domains": [ 00:28:09.985 { 00:28:09.985 "dma_device_id": "system", 00:28:09.985 "dma_device_type": 1 00:28:09.985 }, 00:28:09.985 { 00:28:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.985 "dma_device_type": 2 00:28:09.985 } 00:28:09.985 ], 00:28:09.985 "driver_specific": {} 00:28:09.985 }' 00:28:09.985 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:09.985 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:10.243 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:10.243 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:10.243 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:10.243 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:10.243 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:10.243 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:10.243 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:10.243 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:10.502 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:10.502 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:10.502 11:21:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:10.502 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:10.502 11:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:10.760 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:10.760 "name": "BaseBdev2", 00:28:10.760 "aliases": [ 00:28:10.760 "dc1d9fea-e4c8-46d2-b862-282f9d831360" 00:28:10.760 ], 00:28:10.760 "product_name": "Malloc disk", 00:28:10.760 "block_size": 512, 00:28:10.760 "num_blocks": 65536, 00:28:10.760 "uuid": "dc1d9fea-e4c8-46d2-b862-282f9d831360", 00:28:10.760 "assigned_rate_limits": { 00:28:10.760 "rw_ios_per_sec": 0, 00:28:10.760 "rw_mbytes_per_sec": 0, 00:28:10.760 "r_mbytes_per_sec": 0, 00:28:10.760 "w_mbytes_per_sec": 0 00:28:10.760 }, 00:28:10.760 "claimed": true, 00:28:10.760 "claim_type": "exclusive_write", 00:28:10.760 "zoned": false, 00:28:10.760 "supported_io_types": { 00:28:10.760 "read": true, 00:28:10.760 "write": true, 00:28:10.760 "unmap": true, 00:28:10.760 "write_zeroes": true, 00:28:10.760 "flush": true, 00:28:10.760 "reset": true, 00:28:10.760 "compare": false, 00:28:10.760 "compare_and_write": false, 00:28:10.760 "abort": true, 00:28:10.760 "nvme_admin": false, 00:28:10.760 "nvme_io": false 00:28:10.760 }, 00:28:10.760 "memory_domains": [ 00:28:10.760 { 00:28:10.760 "dma_device_id": "system", 00:28:10.760 "dma_device_type": 1 00:28:10.760 }, 00:28:10.760 { 00:28:10.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.760 "dma_device_type": 2 00:28:10.760 } 00:28:10.760 ], 00:28:10.760 "driver_specific": {} 00:28:10.760 }' 00:28:10.760 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:10.760 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:10.760 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:10.760 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:10.760 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:10.760 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:10.760 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:11.018 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:11.018 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:11.018 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:11.018 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:11.018 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:11.018 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:11.018 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:11.018 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:11.275 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 
00:28:11.275 "name": "BaseBdev3", 00:28:11.275 "aliases": [ 00:28:11.275 "2e743918-cf9d-4478-97d0-f5c2b9394a1f" 00:28:11.275 ], 00:28:11.275 "product_name": "Malloc disk", 00:28:11.275 "block_size": 512, 00:28:11.275 "num_blocks": 65536, 00:28:11.276 "uuid": "2e743918-cf9d-4478-97d0-f5c2b9394a1f", 00:28:11.276 "assigned_rate_limits": { 00:28:11.276 "rw_ios_per_sec": 0, 00:28:11.276 "rw_mbytes_per_sec": 0, 00:28:11.276 "r_mbytes_per_sec": 0, 00:28:11.276 "w_mbytes_per_sec": 0 00:28:11.276 }, 00:28:11.276 "claimed": true, 00:28:11.276 "claim_type": "exclusive_write", 00:28:11.276 "zoned": false, 00:28:11.276 "supported_io_types": { 00:28:11.276 "read": true, 00:28:11.276 "write": true, 00:28:11.276 "unmap": true, 00:28:11.276 "write_zeroes": true, 00:28:11.276 "flush": true, 00:28:11.276 "reset": true, 00:28:11.276 "compare": false, 00:28:11.276 "compare_and_write": false, 00:28:11.276 "abort": true, 00:28:11.276 "nvme_admin": false, 00:28:11.276 "nvme_io": false 00:28:11.276 }, 00:28:11.276 "memory_domains": [ 00:28:11.276 { 00:28:11.276 "dma_device_id": "system", 00:28:11.276 "dma_device_type": 1 00:28:11.276 }, 00:28:11.276 { 00:28:11.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.276 "dma_device_type": 2 00:28:11.276 } 00:28:11.276 ], 00:28:11.276 "driver_specific": {} 00:28:11.276 }' 00:28:11.276 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:11.276 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:11.533 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:11.533 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:11.533 11:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:11.534 11:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:11.534 11:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:11.534 11:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:11.534 11:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:11.534 11:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:11.792 11:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:11.792 11:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:11.792 11:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:12.050 [2024-05-15 11:21:30.448585] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:12.050 [2024-05-15 11:21:30.448633] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:12.050 [2024-05-15 11:21:30.448705] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:12.050 [2024-05-15 11:21:30.448750] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:12.050 [2024-05-15 11:21:30.448762] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # 
killprocess 60182 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 60182 ']' 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 60182 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60182 00:28:12.050 killing process with pid 60182 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60182' 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 60182 00:28:12.050 11:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 60182 00:28:12.050 [2024-05-15 11:21:30.481988] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:12.308 [2024-05-15 11:21:30.732957] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:13.680 11:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:28:13.680 ************************************ 00:28:13.680 END TEST raid_state_function_test_sb 00:28:13.680 ************************************ 00:28:13.680 00:28:13.680 real 0m30.256s 00:28:13.680 user 0m56.814s 00:28:13.680 sys 0m3.079s 00:28:13.680 11:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:13.680 11:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.680 11:21:32 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:28:13.680 11:21:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:28:13.680 11:21:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:13.681 11:21:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:13.681 ************************************ 00:28:13.681 START TEST raid_superblock_test 00:28:13.681 ************************************ 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:28:13.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61173 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61173 /var/tmp/spdk-raid.sock 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 61173 ']' 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:13.681 11:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.681 [2024-05-15 11:21:32.181716] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
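The raid_superblock_test run that starts here assembles a concat volume from three passthru bdevs; the RPC sequence it traces below amounts to the following sketch (commands, UUIDs and names copied from the later trace lines; the malloc/passthru pattern is repeated for malloc2/pt2 and malloc3/pt3):
# back each base bdev with a malloc disk and wrap it in a passthru bdev
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# build the concat raid with a 64 KiB strip size and an on-disk superblock (-s)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s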
00:28:13.681 [2024-05-15 11:21:32.182046] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61173 ] 00:28:13.938 [2024-05-15 11:21:32.332848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.938 [2024-05-15 11:21:32.548324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.195 [2024-05-15 11:21:32.746962] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:14.453 11:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:28:14.710 malloc1 00:28:14.710 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:14.967 [2024-05-15 11:21:33.432901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:14.967 [2024-05-15 11:21:33.432999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.967 [2024-05-15 11:21:33.433087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:28:14.967 [2024-05-15 11:21:33.433136] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.967 [2024-05-15 11:21:33.435003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.967 [2024-05-15 11:21:33.435042] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:14.967 pt1 00:28:14.967 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:14.967 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:14.967 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:14.967 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:28:14.967 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:14.967 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:28:14.967 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:14.967 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:14.967 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:28:15.224 malloc2 00:28:15.224 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:15.224 [2024-05-15 11:21:33.859866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:15.224 [2024-05-15 11:21:33.859959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.224 [2024-05-15 11:21:33.860011] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:28:15.224 [2024-05-15 11:21:33.860051] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.483 [2024-05-15 11:21:33.863064] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.483 [2024-05-15 11:21:33.863186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:15.483 pt2 00:28:15.483 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:15.483 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:15.483 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:28:15.483 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:28:15.483 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:15.483 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:15.483 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:15.483 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:15.483 11:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:28:15.483 malloc3 00:28:15.483 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:15.740 [2024-05-15 11:21:34.330506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:15.740 [2024-05-15 11:21:34.330614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.740 [2024-05-15 11:21:34.330667] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:28:15.740 [2024-05-15 11:21:34.330717] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.740 [2024-05-15 11:21:34.332797] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.740 [2024-05-15 11:21:34.332861] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:15.740 pt3 00:28:15.740 11:21:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:15.740 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:15.740 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:28:15.998 [2024-05-15 11:21:34.554622] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:15.998 [2024-05-15 11:21:34.556233] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:15.998 [2024-05-15 11:21:34.556286] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:15.998 [2024-05-15 11:21:34.556410] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:28:15.998 [2024-05-15 11:21:34.556425] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:15.998 [2024-05-15 11:21:34.556536] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:28:15.998 [2024-05-15 11:21:34.556802] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:28:15.998 [2024-05-15 11:21:34.556833] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:28:15.998 [2024-05-15 11:21:34.556947] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:15.998 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:28:15.998 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.999 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.256 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:16.256 "name": "raid_bdev1", 00:28:16.256 "uuid": "2e2c5a6d-2f52-458b-802d-bde02e67bed2", 00:28:16.256 "strip_size_kb": 64, 00:28:16.256 "state": "online", 00:28:16.256 "raid_level": "concat", 00:28:16.256 "superblock": true, 00:28:16.256 "num_base_bdevs": 3, 00:28:16.256 "num_base_bdevs_discovered": 3, 00:28:16.256 "num_base_bdevs_operational": 3, 00:28:16.256 "base_bdevs_list": [ 00:28:16.256 { 00:28:16.256 "name": "pt1", 00:28:16.256 "uuid": "491336b2-79c5-54d5-a3f4-90415c2c2e7d", 00:28:16.256 
"is_configured": true, 00:28:16.256 "data_offset": 2048, 00:28:16.256 "data_size": 63488 00:28:16.256 }, 00:28:16.256 { 00:28:16.256 "name": "pt2", 00:28:16.256 "uuid": "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2", 00:28:16.256 "is_configured": true, 00:28:16.256 "data_offset": 2048, 00:28:16.256 "data_size": 63488 00:28:16.256 }, 00:28:16.256 { 00:28:16.257 "name": "pt3", 00:28:16.257 "uuid": "53227ec8-e2b7-57ef-b3c0-ac8892d57b69", 00:28:16.257 "is_configured": true, 00:28:16.257 "data_offset": 2048, 00:28:16.257 "data_size": 63488 00:28:16.257 } 00:28:16.257 ] 00:28:16.257 }' 00:28:16.257 11:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:16.257 11:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.821 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:28:16.821 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:28:16.821 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:28:16.821 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:28:16.821 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:28:16.821 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:28:16.821 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:28:16.821 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:17.079 [2024-05-15 11:21:35.634920] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:17.079 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:28:17.079 "name": "raid_bdev1", 00:28:17.079 "aliases": [ 00:28:17.079 "2e2c5a6d-2f52-458b-802d-bde02e67bed2" 00:28:17.079 ], 00:28:17.079 "product_name": "Raid Volume", 00:28:17.079 "block_size": 512, 00:28:17.079 "num_blocks": 190464, 00:28:17.079 "uuid": "2e2c5a6d-2f52-458b-802d-bde02e67bed2", 00:28:17.079 "assigned_rate_limits": { 00:28:17.079 "rw_ios_per_sec": 0, 00:28:17.079 "rw_mbytes_per_sec": 0, 00:28:17.079 "r_mbytes_per_sec": 0, 00:28:17.079 "w_mbytes_per_sec": 0 00:28:17.079 }, 00:28:17.079 "claimed": false, 00:28:17.079 "zoned": false, 00:28:17.079 "supported_io_types": { 00:28:17.079 "read": true, 00:28:17.079 "write": true, 00:28:17.079 "unmap": true, 00:28:17.079 "write_zeroes": true, 00:28:17.079 "flush": true, 00:28:17.079 "reset": true, 00:28:17.079 "compare": false, 00:28:17.079 "compare_and_write": false, 00:28:17.079 "abort": false, 00:28:17.079 "nvme_admin": false, 00:28:17.079 "nvme_io": false 00:28:17.079 }, 00:28:17.079 "memory_domains": [ 00:28:17.079 { 00:28:17.079 "dma_device_id": "system", 00:28:17.079 "dma_device_type": 1 00:28:17.079 }, 00:28:17.079 { 00:28:17.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.079 "dma_device_type": 2 00:28:17.079 }, 00:28:17.079 { 00:28:17.079 "dma_device_id": "system", 00:28:17.079 "dma_device_type": 1 00:28:17.079 }, 00:28:17.079 { 00:28:17.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.079 "dma_device_type": 2 00:28:17.079 }, 00:28:17.079 { 00:28:17.079 "dma_device_id": "system", 00:28:17.079 "dma_device_type": 1 00:28:17.079 }, 00:28:17.079 { 00:28:17.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.079 "dma_device_type": 
2 00:28:17.079 } 00:28:17.079 ], 00:28:17.079 "driver_specific": { 00:28:17.079 "raid": { 00:28:17.079 "uuid": "2e2c5a6d-2f52-458b-802d-bde02e67bed2", 00:28:17.079 "strip_size_kb": 64, 00:28:17.079 "state": "online", 00:28:17.079 "raid_level": "concat", 00:28:17.079 "superblock": true, 00:28:17.079 "num_base_bdevs": 3, 00:28:17.079 "num_base_bdevs_discovered": 3, 00:28:17.079 "num_base_bdevs_operational": 3, 00:28:17.079 "base_bdevs_list": [ 00:28:17.079 { 00:28:17.079 "name": "pt1", 00:28:17.079 "uuid": "491336b2-79c5-54d5-a3f4-90415c2c2e7d", 00:28:17.079 "is_configured": true, 00:28:17.079 "data_offset": 2048, 00:28:17.079 "data_size": 63488 00:28:17.079 }, 00:28:17.079 { 00:28:17.079 "name": "pt2", 00:28:17.079 "uuid": "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2", 00:28:17.079 "is_configured": true, 00:28:17.079 "data_offset": 2048, 00:28:17.079 "data_size": 63488 00:28:17.079 }, 00:28:17.079 { 00:28:17.079 "name": "pt3", 00:28:17.079 "uuid": "53227ec8-e2b7-57ef-b3c0-ac8892d57b69", 00:28:17.079 "is_configured": true, 00:28:17.079 "data_offset": 2048, 00:28:17.079 "data_size": 63488 00:28:17.079 } 00:28:17.079 ] 00:28:17.079 } 00:28:17.079 } 00:28:17.079 }' 00:28:17.079 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:17.079 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:28:17.079 pt2 00:28:17.079 pt3' 00:28:17.079 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:17.079 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:17.079 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:17.337 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:17.337 "name": "pt1", 00:28:17.337 "aliases": [ 00:28:17.337 "491336b2-79c5-54d5-a3f4-90415c2c2e7d" 00:28:17.337 ], 00:28:17.337 "product_name": "passthru", 00:28:17.337 "block_size": 512, 00:28:17.337 "num_blocks": 65536, 00:28:17.337 "uuid": "491336b2-79c5-54d5-a3f4-90415c2c2e7d", 00:28:17.337 "assigned_rate_limits": { 00:28:17.337 "rw_ios_per_sec": 0, 00:28:17.337 "rw_mbytes_per_sec": 0, 00:28:17.337 "r_mbytes_per_sec": 0, 00:28:17.337 "w_mbytes_per_sec": 0 00:28:17.337 }, 00:28:17.337 "claimed": true, 00:28:17.337 "claim_type": "exclusive_write", 00:28:17.337 "zoned": false, 00:28:17.337 "supported_io_types": { 00:28:17.337 "read": true, 00:28:17.337 "write": true, 00:28:17.337 "unmap": true, 00:28:17.337 "write_zeroes": true, 00:28:17.337 "flush": true, 00:28:17.337 "reset": true, 00:28:17.337 "compare": false, 00:28:17.337 "compare_and_write": false, 00:28:17.337 "abort": true, 00:28:17.337 "nvme_admin": false, 00:28:17.337 "nvme_io": false 00:28:17.337 }, 00:28:17.337 "memory_domains": [ 00:28:17.337 { 00:28:17.337 "dma_device_id": "system", 00:28:17.337 "dma_device_type": 1 00:28:17.337 }, 00:28:17.337 { 00:28:17.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.337 "dma_device_type": 2 00:28:17.337 } 00:28:17.337 ], 00:28:17.337 "driver_specific": { 00:28:17.337 "passthru": { 00:28:17.337 "name": "pt1", 00:28:17.337 "base_bdev_name": "malloc1" 00:28:17.337 } 00:28:17.337 } 00:28:17.337 }' 00:28:17.337 11:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:17.595 11:21:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:17.595 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:17.595 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:17.595 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:17.595 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:17.595 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:17.852 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:17.852 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:17.852 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:17.853 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:17.853 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:17.853 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:17.853 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:17.853 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:18.110 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:18.110 "name": "pt2", 00:28:18.110 "aliases": [ 00:28:18.110 "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2" 00:28:18.110 ], 00:28:18.110 "product_name": "passthru", 00:28:18.110 "block_size": 512, 00:28:18.110 "num_blocks": 65536, 00:28:18.110 "uuid": "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2", 00:28:18.110 "assigned_rate_limits": { 00:28:18.110 "rw_ios_per_sec": 0, 00:28:18.110 "rw_mbytes_per_sec": 0, 00:28:18.110 "r_mbytes_per_sec": 0, 00:28:18.110 "w_mbytes_per_sec": 0 00:28:18.110 }, 00:28:18.110 "claimed": true, 00:28:18.110 "claim_type": "exclusive_write", 00:28:18.110 "zoned": false, 00:28:18.110 "supported_io_types": { 00:28:18.110 "read": true, 00:28:18.110 "write": true, 00:28:18.110 "unmap": true, 00:28:18.110 "write_zeroes": true, 00:28:18.110 "flush": true, 00:28:18.110 "reset": true, 00:28:18.110 "compare": false, 00:28:18.110 "compare_and_write": false, 00:28:18.110 "abort": true, 00:28:18.110 "nvme_admin": false, 00:28:18.110 "nvme_io": false 00:28:18.110 }, 00:28:18.110 "memory_domains": [ 00:28:18.110 { 00:28:18.110 "dma_device_id": "system", 00:28:18.110 "dma_device_type": 1 00:28:18.110 }, 00:28:18.110 { 00:28:18.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.110 "dma_device_type": 2 00:28:18.110 } 00:28:18.110 ], 00:28:18.110 "driver_specific": { 00:28:18.110 "passthru": { 00:28:18.110 "name": "pt2", 00:28:18.110 "base_bdev_name": "malloc2" 00:28:18.110 } 00:28:18.110 } 00:28:18.110 }' 00:28:18.110 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:18.110 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:18.110 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:18.110 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:18.368 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:18.368 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:18.368 11:21:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:18.368 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:18.368 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:18.368 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:18.368 11:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:18.368 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:18.368 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:18.368 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:28:18.368 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:18.627 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:18.627 "name": "pt3", 00:28:18.627 "aliases": [ 00:28:18.627 "53227ec8-e2b7-57ef-b3c0-ac8892d57b69" 00:28:18.627 ], 00:28:18.627 "product_name": "passthru", 00:28:18.627 "block_size": 512, 00:28:18.627 "num_blocks": 65536, 00:28:18.627 "uuid": "53227ec8-e2b7-57ef-b3c0-ac8892d57b69", 00:28:18.627 "assigned_rate_limits": { 00:28:18.627 "rw_ios_per_sec": 0, 00:28:18.627 "rw_mbytes_per_sec": 0, 00:28:18.627 "r_mbytes_per_sec": 0, 00:28:18.627 "w_mbytes_per_sec": 0 00:28:18.627 }, 00:28:18.627 "claimed": true, 00:28:18.627 "claim_type": "exclusive_write", 00:28:18.627 "zoned": false, 00:28:18.627 "supported_io_types": { 00:28:18.627 "read": true, 00:28:18.627 "write": true, 00:28:18.627 "unmap": true, 00:28:18.627 "write_zeroes": true, 00:28:18.627 "flush": true, 00:28:18.627 "reset": true, 00:28:18.627 "compare": false, 00:28:18.627 "compare_and_write": false, 00:28:18.627 "abort": true, 00:28:18.627 "nvme_admin": false, 00:28:18.627 "nvme_io": false 00:28:18.627 }, 00:28:18.627 "memory_domains": [ 00:28:18.627 { 00:28:18.627 "dma_device_id": "system", 00:28:18.627 "dma_device_type": 1 00:28:18.627 }, 00:28:18.627 { 00:28:18.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.627 "dma_device_type": 2 00:28:18.627 } 00:28:18.627 ], 00:28:18.627 "driver_specific": { 00:28:18.627 "passthru": { 00:28:18.627 "name": "pt3", 00:28:18.627 "base_bdev_name": "malloc3" 00:28:18.627 } 00:28:18.627 } 00:28:18.627 }' 00:28:18.627 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:18.885 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:18.885 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:18.885 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:18.885 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:18.885 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:18.885 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:19.143 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:19.143 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:19.143 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:19.143 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:28:19.143 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:19.143 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:19.143 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:28:19.416 [2024-05-15 11:21:37.927721] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:19.416 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2e2c5a6d-2f52-458b-802d-bde02e67bed2 00:28:19.416 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2e2c5a6d-2f52-458b-802d-bde02e67bed2 ']' 00:28:19.416 11:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:19.674 [2024-05-15 11:21:38.175621] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:19.674 [2024-05-15 11:21:38.175655] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:19.674 [2024-05-15 11:21:38.175729] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:19.674 [2024-05-15 11:21:38.175819] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:19.675 [2024-05-15 11:21:38.175831] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:28:19.675 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:28:19.675 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.933 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:28:19.933 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:28:19.933 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:19.933 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:20.191 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:20.191 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:20.191 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:20.191 11:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:20.450 11:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:20.450 11:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:20.708 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:28:20.967 [2024-05-15 11:21:39.567819] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:20.967 [2024-05-15 11:21:39.569519] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:20.967 [2024-05-15 11:21:39.569572] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:20.967 [2024-05-15 11:21:39.569613] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:20.967 [2024-05-15 11:21:39.569708] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:20.967 [2024-05-15 11:21:39.569797] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:28:20.967 [2024-05-15 11:21:39.569903] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:20.967 [2024-05-15 11:21:39.569925] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:28:20.967 request: 00:28:20.967 { 00:28:20.967 "name": "raid_bdev1", 00:28:20.967 "raid_level": "concat", 00:28:20.967 "base_bdevs": [ 00:28:20.967 "malloc1", 00:28:20.967 "malloc2", 00:28:20.967 "malloc3" 00:28:20.967 ], 00:28:20.967 "strip_size_kb": 64, 00:28:20.967 "superblock": false, 00:28:20.967 "method": "bdev_raid_create", 00:28:20.967 "req_id": 1 00:28:20.967 } 00:28:20.967 Got JSON-RPC error response 00:28:20.967 response: 00:28:20.967 { 00:28:20.967 "code": -17, 00:28:20.967 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:20.967 } 00:28:20.967 11:21:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:28:20.967 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:20.967 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:20.967 11:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:20.967 11:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.967 11:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:28:21.224 11:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:28:21.224 11:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:28:21.224 11:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:21.482 [2024-05-15 11:21:40.003806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:21.482 [2024-05-15 11:21:40.003921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.482 [2024-05-15 11:21:40.003973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d680 00:28:21.482 [2024-05-15 11:21:40.004010] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.482 pt1 00:28:21.482 [2024-05-15 11:21:40.005998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.482 [2024-05-15 11:21:40.006038] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:21.482 [2024-05-15 11:21:40.006144] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:28:21.482 [2024-05-15 11:21:40.006223] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.482 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.740 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:28:21.740 "name": "raid_bdev1", 00:28:21.740 "uuid": "2e2c5a6d-2f52-458b-802d-bde02e67bed2", 00:28:21.740 "strip_size_kb": 64, 00:28:21.740 "state": "configuring", 00:28:21.740 "raid_level": "concat", 00:28:21.740 "superblock": true, 00:28:21.740 "num_base_bdevs": 3, 00:28:21.740 "num_base_bdevs_discovered": 1, 00:28:21.740 "num_base_bdevs_operational": 3, 00:28:21.740 "base_bdevs_list": [ 00:28:21.740 { 00:28:21.740 "name": "pt1", 00:28:21.740 "uuid": "491336b2-79c5-54d5-a3f4-90415c2c2e7d", 00:28:21.740 "is_configured": true, 00:28:21.740 "data_offset": 2048, 00:28:21.740 "data_size": 63488 00:28:21.740 }, 00:28:21.740 { 00:28:21.740 "name": null, 00:28:21.740 "uuid": "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2", 00:28:21.740 "is_configured": false, 00:28:21.740 "data_offset": 2048, 00:28:21.740 "data_size": 63488 00:28:21.740 }, 00:28:21.740 { 00:28:21.740 "name": null, 00:28:21.740 "uuid": "53227ec8-e2b7-57ef-b3c0-ac8892d57b69", 00:28:21.740 "is_configured": false, 00:28:21.740 "data_offset": 2048, 00:28:21.740 "data_size": 63488 00:28:21.740 } 00:28:21.740 ] 00:28:21.740 }' 00:28:21.740 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:21.740 11:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:22.305 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:28:22.305 11:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:22.563 [2024-05-15 11:21:41.112070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:22.563 [2024-05-15 11:21:41.112190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.563 [2024-05-15 11:21:41.112245] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ee80 00:28:22.563 [2024-05-15 11:21:41.112269] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.563 [2024-05-15 11:21:41.112653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.563 [2024-05-15 11:21:41.112689] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:22.563 [2024-05-15 11:21:41.112804] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:22.563 [2024-05-15 11:21:41.112855] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:22.563 pt2 00:28:22.563 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:22.821 [2024-05-15 11:21:41.312071] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.821 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.079 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:23.080 "name": "raid_bdev1", 00:28:23.080 "uuid": "2e2c5a6d-2f52-458b-802d-bde02e67bed2", 00:28:23.080 "strip_size_kb": 64, 00:28:23.080 "state": "configuring", 00:28:23.080 "raid_level": "concat", 00:28:23.080 "superblock": true, 00:28:23.080 "num_base_bdevs": 3, 00:28:23.080 "num_base_bdevs_discovered": 1, 00:28:23.080 "num_base_bdevs_operational": 3, 00:28:23.080 "base_bdevs_list": [ 00:28:23.080 { 00:28:23.080 "name": "pt1", 00:28:23.080 "uuid": "491336b2-79c5-54d5-a3f4-90415c2c2e7d", 00:28:23.080 "is_configured": true, 00:28:23.080 "data_offset": 2048, 00:28:23.080 "data_size": 63488 00:28:23.080 }, 00:28:23.080 { 00:28:23.080 "name": null, 00:28:23.080 "uuid": "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2", 00:28:23.080 "is_configured": false, 00:28:23.080 "data_offset": 2048, 00:28:23.080 "data_size": 63488 00:28:23.080 }, 00:28:23.080 { 00:28:23.080 "name": null, 00:28:23.080 "uuid": "53227ec8-e2b7-57ef-b3c0-ac8892d57b69", 00:28:23.080 "is_configured": false, 00:28:23.080 "data_offset": 2048, 00:28:23.080 "data_size": 63488 00:28:23.080 } 00:28:23.080 ] 00:28:23.080 }' 00:28:23.080 11:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:23.080 11:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.014 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:28:24.014 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:24.014 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:24.014 [2024-05-15 11:21:42.568297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:24.014 [2024-05-15 11:21:42.568382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.014 [2024-05-15 11:21:42.568430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030680 00:28:24.014 [2024-05-15 11:21:42.568463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.014 [2024-05-15 11:21:42.569086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.014 [2024-05-15 11:21:42.569144] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:24.014 [2024-05-15 11:21:42.569244] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:24.014 [2024-05-15 11:21:42.569272] bdev_raid.c:3122:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:28:24.014 pt2 00:28:24.014 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:24.014 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:24.014 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:24.273 [2024-05-15 11:21:42.820342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:24.273 [2024-05-15 11:21:42.820441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.273 [2024-05-15 11:21:42.820489] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031b80 00:28:24.273 [2024-05-15 11:21:42.820549] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.273 [2024-05-15 11:21:42.821081] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.273 [2024-05-15 11:21:42.821127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:24.273 [2024-05-15 11:21:42.821235] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:24.273 [2024-05-15 11:21:42.821263] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:24.273 [2024-05-15 11:21:42.821351] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:28:24.273 [2024-05-15 11:21:42.821364] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:24.273 [2024-05-15 11:21:42.821455] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:28:24.273 [2024-05-15 11:21:42.821657] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:28:24.273 [2024-05-15 11:21:42.821672] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:28:24.273 [2024-05-15 11:21:42.821782] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:24.273 pt3 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.273 11:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.532 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:24.532 "name": "raid_bdev1", 00:28:24.532 "uuid": "2e2c5a6d-2f52-458b-802d-bde02e67bed2", 00:28:24.532 "strip_size_kb": 64, 00:28:24.532 "state": "online", 00:28:24.532 "raid_level": "concat", 00:28:24.532 "superblock": true, 00:28:24.532 "num_base_bdevs": 3, 00:28:24.532 "num_base_bdevs_discovered": 3, 00:28:24.532 "num_base_bdevs_operational": 3, 00:28:24.532 "base_bdevs_list": [ 00:28:24.532 { 00:28:24.532 "name": "pt1", 00:28:24.532 "uuid": "491336b2-79c5-54d5-a3f4-90415c2c2e7d", 00:28:24.532 "is_configured": true, 00:28:24.532 "data_offset": 2048, 00:28:24.532 "data_size": 63488 00:28:24.532 }, 00:28:24.532 { 00:28:24.532 "name": "pt2", 00:28:24.532 "uuid": "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2", 00:28:24.532 "is_configured": true, 00:28:24.532 "data_offset": 2048, 00:28:24.532 "data_size": 63488 00:28:24.532 }, 00:28:24.532 { 00:28:24.532 "name": "pt3", 00:28:24.532 "uuid": "53227ec8-e2b7-57ef-b3c0-ac8892d57b69", 00:28:24.532 "is_configured": true, 00:28:24.532 "data_offset": 2048, 00:28:24.532 "data_size": 63488 00:28:24.532 } 00:28:24.532 ] 00:28:24.532 }' 00:28:24.532 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:24.532 11:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.463 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:28:25.463 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:28:25.463 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:28:25.463 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:28:25.463 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:28:25.463 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:28:25.463 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:25.463 11:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:28:25.463 [2024-05-15 11:21:44.012801] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:25.463 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:28:25.463 "name": "raid_bdev1", 00:28:25.463 "aliases": [ 00:28:25.463 "2e2c5a6d-2f52-458b-802d-bde02e67bed2" 00:28:25.463 ], 00:28:25.463 "product_name": "Raid Volume", 00:28:25.463 "block_size": 512, 00:28:25.463 "num_blocks": 190464, 00:28:25.463 "uuid": "2e2c5a6d-2f52-458b-802d-bde02e67bed2", 00:28:25.463 "assigned_rate_limits": { 00:28:25.463 "rw_ios_per_sec": 0, 00:28:25.463 "rw_mbytes_per_sec": 0, 00:28:25.463 "r_mbytes_per_sec": 0, 00:28:25.463 "w_mbytes_per_sec": 0 00:28:25.463 }, 00:28:25.463 "claimed": false, 00:28:25.463 "zoned": false, 00:28:25.463 "supported_io_types": { 00:28:25.463 "read": true, 00:28:25.463 "write": true, 00:28:25.463 "unmap": true, 00:28:25.463 "write_zeroes": true, 
00:28:25.463 "flush": true, 00:28:25.463 "reset": true, 00:28:25.463 "compare": false, 00:28:25.463 "compare_and_write": false, 00:28:25.463 "abort": false, 00:28:25.463 "nvme_admin": false, 00:28:25.463 "nvme_io": false 00:28:25.463 }, 00:28:25.463 "memory_domains": [ 00:28:25.464 { 00:28:25.464 "dma_device_id": "system", 00:28:25.464 "dma_device_type": 1 00:28:25.464 }, 00:28:25.464 { 00:28:25.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.464 "dma_device_type": 2 00:28:25.464 }, 00:28:25.464 { 00:28:25.464 "dma_device_id": "system", 00:28:25.464 "dma_device_type": 1 00:28:25.464 }, 00:28:25.464 { 00:28:25.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.464 "dma_device_type": 2 00:28:25.464 }, 00:28:25.464 { 00:28:25.464 "dma_device_id": "system", 00:28:25.464 "dma_device_type": 1 00:28:25.464 }, 00:28:25.464 { 00:28:25.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.464 "dma_device_type": 2 00:28:25.464 } 00:28:25.464 ], 00:28:25.464 "driver_specific": { 00:28:25.464 "raid": { 00:28:25.464 "uuid": "2e2c5a6d-2f52-458b-802d-bde02e67bed2", 00:28:25.464 "strip_size_kb": 64, 00:28:25.464 "state": "online", 00:28:25.464 "raid_level": "concat", 00:28:25.464 "superblock": true, 00:28:25.464 "num_base_bdevs": 3, 00:28:25.464 "num_base_bdevs_discovered": 3, 00:28:25.464 "num_base_bdevs_operational": 3, 00:28:25.464 "base_bdevs_list": [ 00:28:25.464 { 00:28:25.464 "name": "pt1", 00:28:25.464 "uuid": "491336b2-79c5-54d5-a3f4-90415c2c2e7d", 00:28:25.464 "is_configured": true, 00:28:25.464 "data_offset": 2048, 00:28:25.464 "data_size": 63488 00:28:25.464 }, 00:28:25.464 { 00:28:25.464 "name": "pt2", 00:28:25.464 "uuid": "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2", 00:28:25.464 "is_configured": true, 00:28:25.464 "data_offset": 2048, 00:28:25.464 "data_size": 63488 00:28:25.464 }, 00:28:25.464 { 00:28:25.464 "name": "pt3", 00:28:25.464 "uuid": "53227ec8-e2b7-57ef-b3c0-ac8892d57b69", 00:28:25.464 "is_configured": true, 00:28:25.464 "data_offset": 2048, 00:28:25.464 "data_size": 63488 00:28:25.464 } 00:28:25.464 ] 00:28:25.464 } 00:28:25.464 } 00:28:25.464 }' 00:28:25.464 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:25.464 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:28:25.464 pt2 00:28:25.464 pt3' 00:28:25.464 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:25.464 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:25.464 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:25.721 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:25.721 "name": "pt1", 00:28:25.721 "aliases": [ 00:28:25.721 "491336b2-79c5-54d5-a3f4-90415c2c2e7d" 00:28:25.721 ], 00:28:25.721 "product_name": "passthru", 00:28:25.721 "block_size": 512, 00:28:25.721 "num_blocks": 65536, 00:28:25.721 "uuid": "491336b2-79c5-54d5-a3f4-90415c2c2e7d", 00:28:25.721 "assigned_rate_limits": { 00:28:25.721 "rw_ios_per_sec": 0, 00:28:25.721 "rw_mbytes_per_sec": 0, 00:28:25.721 "r_mbytes_per_sec": 0, 00:28:25.721 "w_mbytes_per_sec": 0 00:28:25.721 }, 00:28:25.721 "claimed": true, 00:28:25.721 "claim_type": "exclusive_write", 00:28:25.721 "zoned": false, 00:28:25.721 "supported_io_types": { 00:28:25.721 "read": true, 
00:28:25.721 "write": true, 00:28:25.721 "unmap": true, 00:28:25.721 "write_zeroes": true, 00:28:25.721 "flush": true, 00:28:25.721 "reset": true, 00:28:25.721 "compare": false, 00:28:25.721 "compare_and_write": false, 00:28:25.721 "abort": true, 00:28:25.721 "nvme_admin": false, 00:28:25.721 "nvme_io": false 00:28:25.721 }, 00:28:25.721 "memory_domains": [ 00:28:25.721 { 00:28:25.721 "dma_device_id": "system", 00:28:25.721 "dma_device_type": 1 00:28:25.721 }, 00:28:25.721 { 00:28:25.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.721 "dma_device_type": 2 00:28:25.721 } 00:28:25.721 ], 00:28:25.721 "driver_specific": { 00:28:25.721 "passthru": { 00:28:25.722 "name": "pt1", 00:28:25.722 "base_bdev_name": "malloc1" 00:28:25.722 } 00:28:25.722 } 00:28:25.722 }' 00:28:25.722 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:25.722 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:25.979 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:25.979 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:25.979 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:25.979 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:25.979 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:25.979 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:26.237 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:26.237 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:26.237 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:26.237 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:26.237 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:26.237 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:26.237 11:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:26.496 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:26.496 "name": "pt2", 00:28:26.496 "aliases": [ 00:28:26.496 "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2" 00:28:26.496 ], 00:28:26.496 "product_name": "passthru", 00:28:26.496 "block_size": 512, 00:28:26.496 "num_blocks": 65536, 00:28:26.496 "uuid": "fc1e2872-f72d-5aaa-8d2a-e714d81a8de2", 00:28:26.496 "assigned_rate_limits": { 00:28:26.496 "rw_ios_per_sec": 0, 00:28:26.496 "rw_mbytes_per_sec": 0, 00:28:26.496 "r_mbytes_per_sec": 0, 00:28:26.496 "w_mbytes_per_sec": 0 00:28:26.496 }, 00:28:26.496 "claimed": true, 00:28:26.496 "claim_type": "exclusive_write", 00:28:26.496 "zoned": false, 00:28:26.496 "supported_io_types": { 00:28:26.496 "read": true, 00:28:26.496 "write": true, 00:28:26.496 "unmap": true, 00:28:26.496 "write_zeroes": true, 00:28:26.496 "flush": true, 00:28:26.496 "reset": true, 00:28:26.496 "compare": false, 00:28:26.496 "compare_and_write": false, 00:28:26.496 "abort": true, 00:28:26.496 "nvme_admin": false, 00:28:26.496 "nvme_io": false 00:28:26.496 }, 00:28:26.496 "memory_domains": [ 00:28:26.496 { 00:28:26.496 "dma_device_id": "system", 00:28:26.496 "dma_device_type": 1 
00:28:26.496 }, 00:28:26.496 { 00:28:26.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.496 "dma_device_type": 2 00:28:26.496 } 00:28:26.496 ], 00:28:26.496 "driver_specific": { 00:28:26.496 "passthru": { 00:28:26.496 "name": "pt2", 00:28:26.496 "base_bdev_name": "malloc2" 00:28:26.496 } 00:28:26.496 } 00:28:26.496 }' 00:28:26.496 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:26.496 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:26.779 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:26.779 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:26.779 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:26.779 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:26.779 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:26.779 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:26.779 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:26.780 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:27.038 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:27.038 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:27.038 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:27.038 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:28:27.038 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:27.297 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:27.297 "name": "pt3", 00:28:27.297 "aliases": [ 00:28:27.297 "53227ec8-e2b7-57ef-b3c0-ac8892d57b69" 00:28:27.297 ], 00:28:27.297 "product_name": "passthru", 00:28:27.297 "block_size": 512, 00:28:27.297 "num_blocks": 65536, 00:28:27.297 "uuid": "53227ec8-e2b7-57ef-b3c0-ac8892d57b69", 00:28:27.297 "assigned_rate_limits": { 00:28:27.297 "rw_ios_per_sec": 0, 00:28:27.297 "rw_mbytes_per_sec": 0, 00:28:27.297 "r_mbytes_per_sec": 0, 00:28:27.297 "w_mbytes_per_sec": 0 00:28:27.297 }, 00:28:27.297 "claimed": true, 00:28:27.297 "claim_type": "exclusive_write", 00:28:27.297 "zoned": false, 00:28:27.297 "supported_io_types": { 00:28:27.297 "read": true, 00:28:27.297 "write": true, 00:28:27.297 "unmap": true, 00:28:27.297 "write_zeroes": true, 00:28:27.297 "flush": true, 00:28:27.297 "reset": true, 00:28:27.297 "compare": false, 00:28:27.297 "compare_and_write": false, 00:28:27.297 "abort": true, 00:28:27.297 "nvme_admin": false, 00:28:27.297 "nvme_io": false 00:28:27.297 }, 00:28:27.297 "memory_domains": [ 00:28:27.297 { 00:28:27.297 "dma_device_id": "system", 00:28:27.297 "dma_device_type": 1 00:28:27.297 }, 00:28:27.297 { 00:28:27.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.297 "dma_device_type": 2 00:28:27.297 } 00:28:27.297 ], 00:28:27.297 "driver_specific": { 00:28:27.297 "passthru": { 00:28:27.297 "name": "pt3", 00:28:27.297 "base_bdev_name": "malloc3" 00:28:27.297 } 00:28:27.297 } 00:28:27.297 }' 00:28:27.297 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:27.297 11:21:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:27.297 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:27.297 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:27.555 11:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:27.555 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:27.555 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:27.555 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:27.555 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:27.555 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:27.813 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:27.813 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:27.813 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:28:27.813 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:28.072 [2024-05-15 11:21:46.509132] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2e2c5a6d-2f52-458b-802d-bde02e67bed2 '!=' 2e2c5a6d-2f52-458b-802d-bde02e67bed2 ']' 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 61173 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 61173 ']' 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 61173 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61173 00:28:28.072 killing process with pid 61173 00:28:28.072 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:28.073 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:28.073 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61173' 00:28:28.073 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 61173 00:28:28.073 11:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 61173 00:28:28.073 [2024-05-15 11:21:46.550723] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:28.073 [2024-05-15 11:21:46.550804] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:28.073 [2024-05-15 11:21:46.550860] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:28:28.073 [2024-05-15 11:21:46.550874] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:28:28.330 [2024-05-15 11:21:46.803483] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:29.705 ************************************ 00:28:29.705 END TEST raid_superblock_test 00:28:29.705 ************************************ 00:28:29.705 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:28:29.705 00:28:29.705 real 0m16.060s 00:28:29.705 user 0m29.110s 00:28:29.705 sys 0m1.654s 00:28:29.705 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:29.705 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.705 11:21:48 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:28:29.705 11:21:48 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:28:29.705 11:21:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:28:29.705 11:21:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:29.705 11:21:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:29.705 ************************************ 00:28:29.705 START TEST raid_state_function_test 00:28:29.705 ************************************ 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:28:29.705 Process raid pid: 61674 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local 
raid_bdev_name=Existed_Raid 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=61674 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 61674' 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 61674 /var/tmp/spdk-raid.sock 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 61674 ']' 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:29.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:29.705 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.705 [2024-05-15 11:21:48.290184] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
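For reference, the RPC sequence that raid_state_function_test drives here can be replayed by hand against a standalone bdev_svc instance. The sketch below is illustrative only: it reuses the repo path, RPC socket, base bdev names, and malloc sizing that appear in this log, and it omits the harness's waitforlisten and cleanup plumbing (a real run would wait for the socket before issuing RPCs).

  # start the bdev service app with raid debug logging, as the test does
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &

  # creating the raid1 bdev before its base bdevs exist leaves it in the "configuring" state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # supply the first base bdev (32 MB total, 512-byte blocks) and re-inspect the raid state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all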
00:28:29.705 [2024-05-15 11:21:48.290389] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.966 [2024-05-15 11:21:48.452174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.226 [2024-05-15 11:21:48.710478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.491 [2024-05-15 11:21:48.918986] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:30.754 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:30.754 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:28:30.754 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:30.754 [2024-05-15 11:21:49.384579] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:30.754 [2024-05-15 11:21:49.384688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:30.754 [2024-05-15 11:21:49.384709] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:30.754 [2024-05-15 11:21:49.384735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:30.754 [2024-05-15 11:21:49.384747] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:30.754 [2024-05-15 11:21:49.384802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:31.014 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.272 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:31.272 "name": "Existed_Raid", 00:28:31.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.272 "strip_size_kb": 0, 00:28:31.272 
"state": "configuring", 00:28:31.272 "raid_level": "raid1", 00:28:31.272 "superblock": false, 00:28:31.272 "num_base_bdevs": 3, 00:28:31.272 "num_base_bdevs_discovered": 0, 00:28:31.272 "num_base_bdevs_operational": 3, 00:28:31.272 "base_bdevs_list": [ 00:28:31.272 { 00:28:31.272 "name": "BaseBdev1", 00:28:31.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.272 "is_configured": false, 00:28:31.272 "data_offset": 0, 00:28:31.272 "data_size": 0 00:28:31.272 }, 00:28:31.272 { 00:28:31.272 "name": "BaseBdev2", 00:28:31.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.272 "is_configured": false, 00:28:31.272 "data_offset": 0, 00:28:31.272 "data_size": 0 00:28:31.272 }, 00:28:31.272 { 00:28:31.272 "name": "BaseBdev3", 00:28:31.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.272 "is_configured": false, 00:28:31.272 "data_offset": 0, 00:28:31.272 "data_size": 0 00:28:31.272 } 00:28:31.272 ] 00:28:31.272 }' 00:28:31.272 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:31.272 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.839 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:32.097 [2024-05-15 11:21:50.556605] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:32.097 [2024-05-15 11:21:50.556665] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:28:32.097 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:32.355 [2024-05-15 11:21:50.800630] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:32.355 [2024-05-15 11:21:50.800722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:32.355 [2024-05-15 11:21:50.800739] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:32.355 [2024-05-15 11:21:50.800770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:32.355 [2024-05-15 11:21:50.800781] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:32.355 [2024-05-15 11:21:50.801012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:32.355 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:32.613 [2024-05-15 11:21:51.102725] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:32.613 BaseBdev1 00:28:32.613 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:28:32.613 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:28:32.613 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:32.613 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:28:32.613 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:32.613 
11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:32.613 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:32.871 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:33.130 [ 00:28:33.130 { 00:28:33.130 "name": "BaseBdev1", 00:28:33.130 "aliases": [ 00:28:33.130 "34c0a7c4-e989-4bc9-83d6-98cd35bfa6d6" 00:28:33.130 ], 00:28:33.130 "product_name": "Malloc disk", 00:28:33.130 "block_size": 512, 00:28:33.130 "num_blocks": 65536, 00:28:33.130 "uuid": "34c0a7c4-e989-4bc9-83d6-98cd35bfa6d6", 00:28:33.130 "assigned_rate_limits": { 00:28:33.130 "rw_ios_per_sec": 0, 00:28:33.130 "rw_mbytes_per_sec": 0, 00:28:33.130 "r_mbytes_per_sec": 0, 00:28:33.130 "w_mbytes_per_sec": 0 00:28:33.130 }, 00:28:33.130 "claimed": true, 00:28:33.130 "claim_type": "exclusive_write", 00:28:33.130 "zoned": false, 00:28:33.130 "supported_io_types": { 00:28:33.130 "read": true, 00:28:33.130 "write": true, 00:28:33.130 "unmap": true, 00:28:33.130 "write_zeroes": true, 00:28:33.130 "flush": true, 00:28:33.130 "reset": true, 00:28:33.130 "compare": false, 00:28:33.130 "compare_and_write": false, 00:28:33.130 "abort": true, 00:28:33.130 "nvme_admin": false, 00:28:33.130 "nvme_io": false 00:28:33.130 }, 00:28:33.130 "memory_domains": [ 00:28:33.130 { 00:28:33.130 "dma_device_id": "system", 00:28:33.130 "dma_device_type": 1 00:28:33.130 }, 00:28:33.130 { 00:28:33.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:33.130 "dma_device_type": 2 00:28:33.130 } 00:28:33.130 ], 00:28:33.130 "driver_specific": {} 00:28:33.130 } 00:28:33.130 ] 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.130 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:33.388 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:33.388 "name": 
"Existed_Raid", 00:28:33.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.388 "strip_size_kb": 0, 00:28:33.388 "state": "configuring", 00:28:33.388 "raid_level": "raid1", 00:28:33.388 "superblock": false, 00:28:33.388 "num_base_bdevs": 3, 00:28:33.388 "num_base_bdevs_discovered": 1, 00:28:33.388 "num_base_bdevs_operational": 3, 00:28:33.388 "base_bdevs_list": [ 00:28:33.388 { 00:28:33.388 "name": "BaseBdev1", 00:28:33.388 "uuid": "34c0a7c4-e989-4bc9-83d6-98cd35bfa6d6", 00:28:33.388 "is_configured": true, 00:28:33.388 "data_offset": 0, 00:28:33.388 "data_size": 65536 00:28:33.388 }, 00:28:33.388 { 00:28:33.388 "name": "BaseBdev2", 00:28:33.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.388 "is_configured": false, 00:28:33.388 "data_offset": 0, 00:28:33.388 "data_size": 0 00:28:33.388 }, 00:28:33.388 { 00:28:33.388 "name": "BaseBdev3", 00:28:33.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.388 "is_configured": false, 00:28:33.388 "data_offset": 0, 00:28:33.388 "data_size": 0 00:28:33.388 } 00:28:33.388 ] 00:28:33.388 }' 00:28:33.388 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:33.388 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.955 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:34.215 [2024-05-15 11:21:52.610965] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:34.215 [2024-05-15 11:21:52.611034] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:28:34.215 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:34.473 [2024-05-15 11:21:52.863047] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:34.473 [2024-05-15 11:21:52.864650] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:34.473 [2024-05-15 11:21:52.864708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:34.473 [2024-05-15 11:21:52.864722] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:34.473 [2024-05-15 11:21:52.864749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:34.473 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:28:34.473 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:28:34.473 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:34.473 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:34.473 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:34.473 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:34.473 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:34.473 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:28:34.474 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:34.474 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:34.474 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:34.474 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:34.474 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.474 11:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:34.474 11:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:34.474 "name": "Existed_Raid", 00:28:34.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:34.474 "strip_size_kb": 0, 00:28:34.474 "state": "configuring", 00:28:34.474 "raid_level": "raid1", 00:28:34.474 "superblock": false, 00:28:34.474 "num_base_bdevs": 3, 00:28:34.474 "num_base_bdevs_discovered": 1, 00:28:34.474 "num_base_bdevs_operational": 3, 00:28:34.474 "base_bdevs_list": [ 00:28:34.474 { 00:28:34.474 "name": "BaseBdev1", 00:28:34.474 "uuid": "34c0a7c4-e989-4bc9-83d6-98cd35bfa6d6", 00:28:34.474 "is_configured": true, 00:28:34.474 "data_offset": 0, 00:28:34.474 "data_size": 65536 00:28:34.474 }, 00:28:34.474 { 00:28:34.474 "name": "BaseBdev2", 00:28:34.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:34.474 "is_configured": false, 00:28:34.474 "data_offset": 0, 00:28:34.474 "data_size": 0 00:28:34.474 }, 00:28:34.474 { 00:28:34.474 "name": "BaseBdev3", 00:28:34.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:34.474 "is_configured": false, 00:28:34.474 "data_offset": 0, 00:28:34.474 "data_size": 0 00:28:34.474 } 00:28:34.474 ] 00:28:34.474 }' 00:28:34.474 11:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:34.474 11:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.415 11:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:35.674 [2024-05-15 11:21:54.056421] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:35.674 BaseBdev2 00:28:35.674 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:28:35.674 11:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:28:35.674 11:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:35.674 11:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:28:35.674 11:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:35.674 11:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:35.674 11:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:35.933 [ 00:28:35.933 { 00:28:35.933 "name": "BaseBdev2", 00:28:35.933 "aliases": [ 00:28:35.933 "2981401f-dc76-4aea-9a28-3a4f2398f5d2" 00:28:35.933 ], 00:28:35.933 "product_name": "Malloc disk", 00:28:35.933 "block_size": 512, 00:28:35.933 "num_blocks": 65536, 00:28:35.933 "uuid": "2981401f-dc76-4aea-9a28-3a4f2398f5d2", 00:28:35.933 "assigned_rate_limits": { 00:28:35.933 "rw_ios_per_sec": 0, 00:28:35.933 "rw_mbytes_per_sec": 0, 00:28:35.933 "r_mbytes_per_sec": 0, 00:28:35.933 "w_mbytes_per_sec": 0 00:28:35.933 }, 00:28:35.933 "claimed": true, 00:28:35.933 "claim_type": "exclusive_write", 00:28:35.933 "zoned": false, 00:28:35.933 "supported_io_types": { 00:28:35.933 "read": true, 00:28:35.933 "write": true, 00:28:35.933 "unmap": true, 00:28:35.933 "write_zeroes": true, 00:28:35.933 "flush": true, 00:28:35.933 "reset": true, 00:28:35.933 "compare": false, 00:28:35.933 "compare_and_write": false, 00:28:35.933 "abort": true, 00:28:35.933 "nvme_admin": false, 00:28:35.933 "nvme_io": false 00:28:35.933 }, 00:28:35.933 "memory_domains": [ 00:28:35.933 { 00:28:35.933 "dma_device_id": "system", 00:28:35.933 "dma_device_type": 1 00:28:35.933 }, 00:28:35.933 { 00:28:35.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:35.933 "dma_device_type": 2 00:28:35.933 } 00:28:35.933 ], 00:28:35.933 "driver_specific": {} 00:28:35.933 } 00:28:35.933 ] 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.933 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:36.192 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:36.192 "name": "Existed_Raid", 00:28:36.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.192 "strip_size_kb": 0, 00:28:36.192 "state": "configuring", 00:28:36.192 "raid_level": "raid1", 00:28:36.192 "superblock": false, 00:28:36.192 "num_base_bdevs": 
3, 00:28:36.192 "num_base_bdevs_discovered": 2, 00:28:36.192 "num_base_bdevs_operational": 3, 00:28:36.192 "base_bdevs_list": [ 00:28:36.192 { 00:28:36.192 "name": "BaseBdev1", 00:28:36.192 "uuid": "34c0a7c4-e989-4bc9-83d6-98cd35bfa6d6", 00:28:36.192 "is_configured": true, 00:28:36.192 "data_offset": 0, 00:28:36.192 "data_size": 65536 00:28:36.192 }, 00:28:36.192 { 00:28:36.192 "name": "BaseBdev2", 00:28:36.192 "uuid": "2981401f-dc76-4aea-9a28-3a4f2398f5d2", 00:28:36.192 "is_configured": true, 00:28:36.192 "data_offset": 0, 00:28:36.192 "data_size": 65536 00:28:36.192 }, 00:28:36.192 { 00:28:36.192 "name": "BaseBdev3", 00:28:36.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.192 "is_configured": false, 00:28:36.192 "data_offset": 0, 00:28:36.192 "data_size": 0 00:28:36.192 } 00:28:36.192 ] 00:28:36.192 }' 00:28:36.192 11:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:36.193 11:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.127 11:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:37.127 [2024-05-15 11:21:55.717892] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:37.127 [2024-05-15 11:21:55.717950] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:28:37.127 [2024-05-15 11:21:55.717961] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:37.127 [2024-05-15 11:21:55.718074] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:28:37.127 [2024-05-15 11:21:55.718333] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:28:37.127 [2024-05-15 11:21:55.718347] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:28:37.127 [2024-05-15 11:21:55.718538] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:37.127 BaseBdev3 00:28:37.127 11:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:28:37.127 11:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:28:37.127 11:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:37.127 11:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:28:37.127 11:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:37.127 11:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:37.127 11:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:37.385 11:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:37.701 [ 00:28:37.701 { 00:28:37.701 "name": "BaseBdev3", 00:28:37.701 "aliases": [ 00:28:37.701 "db277649-2910-4af4-83e7-4002d52bf22f" 00:28:37.701 ], 00:28:37.701 "product_name": "Malloc disk", 00:28:37.701 "block_size": 512, 00:28:37.701 "num_blocks": 65536, 00:28:37.701 "uuid": "db277649-2910-4af4-83e7-4002d52bf22f", 
00:28:37.701 "assigned_rate_limits": { 00:28:37.701 "rw_ios_per_sec": 0, 00:28:37.701 "rw_mbytes_per_sec": 0, 00:28:37.701 "r_mbytes_per_sec": 0, 00:28:37.701 "w_mbytes_per_sec": 0 00:28:37.701 }, 00:28:37.701 "claimed": true, 00:28:37.701 "claim_type": "exclusive_write", 00:28:37.701 "zoned": false, 00:28:37.701 "supported_io_types": { 00:28:37.701 "read": true, 00:28:37.701 "write": true, 00:28:37.701 "unmap": true, 00:28:37.701 "write_zeroes": true, 00:28:37.701 "flush": true, 00:28:37.701 "reset": true, 00:28:37.701 "compare": false, 00:28:37.701 "compare_and_write": false, 00:28:37.701 "abort": true, 00:28:37.701 "nvme_admin": false, 00:28:37.701 "nvme_io": false 00:28:37.701 }, 00:28:37.701 "memory_domains": [ 00:28:37.701 { 00:28:37.701 "dma_device_id": "system", 00:28:37.701 "dma_device_type": 1 00:28:37.701 }, 00:28:37.701 { 00:28:37.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:37.701 "dma_device_type": 2 00:28:37.701 } 00:28:37.701 ], 00:28:37.701 "driver_specific": {} 00:28:37.701 } 00:28:37.701 ] 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.701 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:37.961 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:37.961 "name": "Existed_Raid", 00:28:37.961 "uuid": "b679538d-4892-45d0-ac2e-755a37d37ee4", 00:28:37.961 "strip_size_kb": 0, 00:28:37.961 "state": "online", 00:28:37.961 "raid_level": "raid1", 00:28:37.961 "superblock": false, 00:28:37.961 "num_base_bdevs": 3, 00:28:37.961 "num_base_bdevs_discovered": 3, 00:28:37.961 "num_base_bdevs_operational": 3, 00:28:37.961 "base_bdevs_list": [ 00:28:37.961 { 00:28:37.961 "name": "BaseBdev1", 00:28:37.961 "uuid": "34c0a7c4-e989-4bc9-83d6-98cd35bfa6d6", 00:28:37.961 "is_configured": true, 00:28:37.961 "data_offset": 0, 00:28:37.961 "data_size": 65536 00:28:37.961 }, 00:28:37.961 { 00:28:37.961 
"name": "BaseBdev2", 00:28:37.961 "uuid": "2981401f-dc76-4aea-9a28-3a4f2398f5d2", 00:28:37.961 "is_configured": true, 00:28:37.961 "data_offset": 0, 00:28:37.961 "data_size": 65536 00:28:37.961 }, 00:28:37.961 { 00:28:37.961 "name": "BaseBdev3", 00:28:37.961 "uuid": "db277649-2910-4af4-83e7-4002d52bf22f", 00:28:37.961 "is_configured": true, 00:28:37.961 "data_offset": 0, 00:28:37.961 "data_size": 65536 00:28:37.961 } 00:28:37.961 ] 00:28:37.961 }' 00:28:37.961 11:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:37.961 11:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:28:38.896 [2024-05-15 11:21:57.430371] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:28:38.896 "name": "Existed_Raid", 00:28:38.896 "aliases": [ 00:28:38.896 "b679538d-4892-45d0-ac2e-755a37d37ee4" 00:28:38.896 ], 00:28:38.896 "product_name": "Raid Volume", 00:28:38.896 "block_size": 512, 00:28:38.896 "num_blocks": 65536, 00:28:38.896 "uuid": "b679538d-4892-45d0-ac2e-755a37d37ee4", 00:28:38.896 "assigned_rate_limits": { 00:28:38.896 "rw_ios_per_sec": 0, 00:28:38.896 "rw_mbytes_per_sec": 0, 00:28:38.896 "r_mbytes_per_sec": 0, 00:28:38.896 "w_mbytes_per_sec": 0 00:28:38.896 }, 00:28:38.896 "claimed": false, 00:28:38.896 "zoned": false, 00:28:38.896 "supported_io_types": { 00:28:38.896 "read": true, 00:28:38.896 "write": true, 00:28:38.896 "unmap": false, 00:28:38.896 "write_zeroes": true, 00:28:38.896 "flush": false, 00:28:38.896 "reset": true, 00:28:38.896 "compare": false, 00:28:38.896 "compare_and_write": false, 00:28:38.896 "abort": false, 00:28:38.896 "nvme_admin": false, 00:28:38.896 "nvme_io": false 00:28:38.896 }, 00:28:38.896 "memory_domains": [ 00:28:38.896 { 00:28:38.896 "dma_device_id": "system", 00:28:38.896 "dma_device_type": 1 00:28:38.896 }, 00:28:38.896 { 00:28:38.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:38.896 "dma_device_type": 2 00:28:38.896 }, 00:28:38.896 { 00:28:38.896 "dma_device_id": "system", 00:28:38.896 "dma_device_type": 1 00:28:38.896 }, 00:28:38.896 { 00:28:38.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:38.896 "dma_device_type": 2 00:28:38.896 }, 00:28:38.896 { 00:28:38.896 "dma_device_id": "system", 00:28:38.896 "dma_device_type": 1 00:28:38.896 }, 00:28:38.896 { 00:28:38.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:38.896 "dma_device_type": 2 00:28:38.896 } 00:28:38.896 ], 00:28:38.896 "driver_specific": { 
00:28:38.896 "raid": { 00:28:38.896 "uuid": "b679538d-4892-45d0-ac2e-755a37d37ee4", 00:28:38.896 "strip_size_kb": 0, 00:28:38.896 "state": "online", 00:28:38.896 "raid_level": "raid1", 00:28:38.896 "superblock": false, 00:28:38.896 "num_base_bdevs": 3, 00:28:38.896 "num_base_bdevs_discovered": 3, 00:28:38.896 "num_base_bdevs_operational": 3, 00:28:38.896 "base_bdevs_list": [ 00:28:38.896 { 00:28:38.896 "name": "BaseBdev1", 00:28:38.896 "uuid": "34c0a7c4-e989-4bc9-83d6-98cd35bfa6d6", 00:28:38.896 "is_configured": true, 00:28:38.896 "data_offset": 0, 00:28:38.896 "data_size": 65536 00:28:38.896 }, 00:28:38.896 { 00:28:38.896 "name": "BaseBdev2", 00:28:38.896 "uuid": "2981401f-dc76-4aea-9a28-3a4f2398f5d2", 00:28:38.896 "is_configured": true, 00:28:38.896 "data_offset": 0, 00:28:38.896 "data_size": 65536 00:28:38.896 }, 00:28:38.896 { 00:28:38.896 "name": "BaseBdev3", 00:28:38.896 "uuid": "db277649-2910-4af4-83e7-4002d52bf22f", 00:28:38.896 "is_configured": true, 00:28:38.896 "data_offset": 0, 00:28:38.896 "data_size": 65536 00:28:38.896 } 00:28:38.896 ] 00:28:38.896 } 00:28:38.896 } 00:28:38.896 }' 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:28:38.896 BaseBdev2 00:28:38.896 BaseBdev3' 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:38.896 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:39.154 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:39.154 "name": "BaseBdev1", 00:28:39.154 "aliases": [ 00:28:39.154 "34c0a7c4-e989-4bc9-83d6-98cd35bfa6d6" 00:28:39.154 ], 00:28:39.154 "product_name": "Malloc disk", 00:28:39.154 "block_size": 512, 00:28:39.155 "num_blocks": 65536, 00:28:39.155 "uuid": "34c0a7c4-e989-4bc9-83d6-98cd35bfa6d6", 00:28:39.155 "assigned_rate_limits": { 00:28:39.155 "rw_ios_per_sec": 0, 00:28:39.155 "rw_mbytes_per_sec": 0, 00:28:39.155 "r_mbytes_per_sec": 0, 00:28:39.155 "w_mbytes_per_sec": 0 00:28:39.155 }, 00:28:39.155 "claimed": true, 00:28:39.155 "claim_type": "exclusive_write", 00:28:39.155 "zoned": false, 00:28:39.155 "supported_io_types": { 00:28:39.155 "read": true, 00:28:39.155 "write": true, 00:28:39.155 "unmap": true, 00:28:39.155 "write_zeroes": true, 00:28:39.155 "flush": true, 00:28:39.155 "reset": true, 00:28:39.155 "compare": false, 00:28:39.155 "compare_and_write": false, 00:28:39.155 "abort": true, 00:28:39.155 "nvme_admin": false, 00:28:39.155 "nvme_io": false 00:28:39.155 }, 00:28:39.155 "memory_domains": [ 00:28:39.155 { 00:28:39.155 "dma_device_id": "system", 00:28:39.155 "dma_device_type": 1 00:28:39.155 }, 00:28:39.155 { 00:28:39.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:39.155 "dma_device_type": 2 00:28:39.155 } 00:28:39.155 ], 00:28:39.155 "driver_specific": {} 00:28:39.155 }' 00:28:39.155 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:39.155 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:39.413 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- 
# [[ 512 == 512 ]] 00:28:39.413 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:39.413 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:39.413 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:39.413 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:39.413 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:39.413 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:39.414 11:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:39.414 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:39.673 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:39.673 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:39.673 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:39.673 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:39.932 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:39.932 "name": "BaseBdev2", 00:28:39.932 "aliases": [ 00:28:39.932 "2981401f-dc76-4aea-9a28-3a4f2398f5d2" 00:28:39.932 ], 00:28:39.932 "product_name": "Malloc disk", 00:28:39.932 "block_size": 512, 00:28:39.932 "num_blocks": 65536, 00:28:39.932 "uuid": "2981401f-dc76-4aea-9a28-3a4f2398f5d2", 00:28:39.932 "assigned_rate_limits": { 00:28:39.932 "rw_ios_per_sec": 0, 00:28:39.932 "rw_mbytes_per_sec": 0, 00:28:39.932 "r_mbytes_per_sec": 0, 00:28:39.932 "w_mbytes_per_sec": 0 00:28:39.932 }, 00:28:39.932 "claimed": true, 00:28:39.932 "claim_type": "exclusive_write", 00:28:39.932 "zoned": false, 00:28:39.932 "supported_io_types": { 00:28:39.932 "read": true, 00:28:39.932 "write": true, 00:28:39.932 "unmap": true, 00:28:39.932 "write_zeroes": true, 00:28:39.932 "flush": true, 00:28:39.932 "reset": true, 00:28:39.932 "compare": false, 00:28:39.932 "compare_and_write": false, 00:28:39.932 "abort": true, 00:28:39.932 "nvme_admin": false, 00:28:39.932 "nvme_io": false 00:28:39.932 }, 00:28:39.932 "memory_domains": [ 00:28:39.932 { 00:28:39.932 "dma_device_id": "system", 00:28:39.932 "dma_device_type": 1 00:28:39.932 }, 00:28:39.932 { 00:28:39.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:39.932 "dma_device_type": 2 00:28:39.932 } 00:28:39.932 ], 00:28:39.932 "driver_specific": {} 00:28:39.932 }' 00:28:39.932 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:39.932 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:39.932 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:39.932 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:39.932 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:39.932 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:39.933 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:39.933 11:21:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:40.191 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:40.191 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:40.191 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:40.191 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:40.191 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:40.191 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:40.191 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:40.449 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:40.449 "name": "BaseBdev3", 00:28:40.449 "aliases": [ 00:28:40.449 "db277649-2910-4af4-83e7-4002d52bf22f" 00:28:40.449 ], 00:28:40.449 "product_name": "Malloc disk", 00:28:40.449 "block_size": 512, 00:28:40.449 "num_blocks": 65536, 00:28:40.449 "uuid": "db277649-2910-4af4-83e7-4002d52bf22f", 00:28:40.449 "assigned_rate_limits": { 00:28:40.449 "rw_ios_per_sec": 0, 00:28:40.449 "rw_mbytes_per_sec": 0, 00:28:40.449 "r_mbytes_per_sec": 0, 00:28:40.449 "w_mbytes_per_sec": 0 00:28:40.449 }, 00:28:40.449 "claimed": true, 00:28:40.449 "claim_type": "exclusive_write", 00:28:40.449 "zoned": false, 00:28:40.449 "supported_io_types": { 00:28:40.449 "read": true, 00:28:40.449 "write": true, 00:28:40.449 "unmap": true, 00:28:40.449 "write_zeroes": true, 00:28:40.449 "flush": true, 00:28:40.449 "reset": true, 00:28:40.449 "compare": false, 00:28:40.449 "compare_and_write": false, 00:28:40.449 "abort": true, 00:28:40.449 "nvme_admin": false, 00:28:40.449 "nvme_io": false 00:28:40.449 }, 00:28:40.449 "memory_domains": [ 00:28:40.449 { 00:28:40.449 "dma_device_id": "system", 00:28:40.449 "dma_device_type": 1 00:28:40.449 }, 00:28:40.449 { 00:28:40.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:40.449 "dma_device_type": 2 00:28:40.449 } 00:28:40.449 ], 00:28:40.449 "driver_specific": {} 00:28:40.449 }' 00:28:40.449 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:40.449 11:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:40.449 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:40.449 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:40.707 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:40.707 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:40.707 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:40.707 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:40.707 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:40.707 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:40.707 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:40.707 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:40.707 11:21:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:41.033 [2024-05-15 11:21:59.578517] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:41.290 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:41.291 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.547 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:41.547 "name": "Existed_Raid", 00:28:41.547 "uuid": "b679538d-4892-45d0-ac2e-755a37d37ee4", 00:28:41.547 "strip_size_kb": 0, 00:28:41.547 "state": "online", 00:28:41.547 "raid_level": "raid1", 00:28:41.547 "superblock": false, 00:28:41.547 "num_base_bdevs": 3, 00:28:41.547 "num_base_bdevs_discovered": 2, 00:28:41.547 "num_base_bdevs_operational": 2, 00:28:41.547 "base_bdevs_list": [ 00:28:41.547 { 00:28:41.547 "name": null, 00:28:41.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.547 "is_configured": false, 00:28:41.547 "data_offset": 0, 00:28:41.547 "data_size": 65536 00:28:41.547 }, 00:28:41.547 { 00:28:41.547 "name": "BaseBdev2", 00:28:41.547 "uuid": "2981401f-dc76-4aea-9a28-3a4f2398f5d2", 00:28:41.547 "is_configured": true, 00:28:41.547 "data_offset": 0, 00:28:41.547 "data_size": 65536 00:28:41.547 }, 00:28:41.547 { 00:28:41.547 "name": "BaseBdev3", 00:28:41.547 "uuid": "db277649-2910-4af4-83e7-4002d52bf22f", 00:28:41.547 "is_configured": true, 00:28:41.547 "data_offset": 0, 00:28:41.547 "data_size": 65536 00:28:41.547 } 00:28:41.547 ] 00:28:41.547 }' 00:28:41.547 11:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:28:41.547 11:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.111 11:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:42.111 11:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:42.111 11:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:28:42.111 11:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.368 11:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:28:42.368 11:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:42.368 11:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:42.368 [2024-05-15 11:22:00.992618] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:42.626 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:42.626 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:42.626 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.626 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:28:42.886 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:28:42.886 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:42.886 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:43.143 [2024-05-15 11:22:01.533428] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:43.143 [2024-05-15 11:22:01.533513] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:43.143 [2024-05-15 11:22:01.640539] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:43.143 [2024-05-15 11:22:01.640665] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:43.143 [2024-05-15 11:22:01.640682] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:28:43.143 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:43.143 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:43.143 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.143 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:28:43.401 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:28:43.401 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:28:43.401 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:28:43.401 
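
For reference, a minimal sketch of the redundancy check exercised in the trace above: delete one mirror leg and confirm the raid1 volume stays online with one fewer base bdev discovered. It assumes the same rpc.py client and /var/tmp/spdk-raid.sock socket; the RPC names and the jq filter come from the trace itself, the surrounding shell is illustrative only:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # drop a single base bdev out of the three-way raid1 volume
  $rpc bdev_malloc_delete BaseBdev1
  # raid1 tolerates the loss of one leg: state stays "online", 2 of 3 base bdevs discovered
  $rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
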
11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:28:43.401 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:28:43.401 11:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:43.659 BaseBdev2 00:28:43.659 11:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:28:43.659 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:28:43.659 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:43.659 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:28:43.659 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:43.659 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:43.659 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:43.917 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:44.175 [ 00:28:44.175 { 00:28:44.175 "name": "BaseBdev2", 00:28:44.175 "aliases": [ 00:28:44.175 "d9d8a409-5c76-4410-9cc6-21ea6f8976f5" 00:28:44.175 ], 00:28:44.175 "product_name": "Malloc disk", 00:28:44.175 "block_size": 512, 00:28:44.175 "num_blocks": 65536, 00:28:44.175 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:44.175 "assigned_rate_limits": { 00:28:44.175 "rw_ios_per_sec": 0, 00:28:44.175 "rw_mbytes_per_sec": 0, 00:28:44.175 "r_mbytes_per_sec": 0, 00:28:44.175 "w_mbytes_per_sec": 0 00:28:44.175 }, 00:28:44.175 "claimed": false, 00:28:44.175 "zoned": false, 00:28:44.175 "supported_io_types": { 00:28:44.175 "read": true, 00:28:44.175 "write": true, 00:28:44.175 "unmap": true, 00:28:44.175 "write_zeroes": true, 00:28:44.175 "flush": true, 00:28:44.175 "reset": true, 00:28:44.175 "compare": false, 00:28:44.175 "compare_and_write": false, 00:28:44.175 "abort": true, 00:28:44.175 "nvme_admin": false, 00:28:44.175 "nvme_io": false 00:28:44.175 }, 00:28:44.175 "memory_domains": [ 00:28:44.175 { 00:28:44.175 "dma_device_id": "system", 00:28:44.175 "dma_device_type": 1 00:28:44.175 }, 00:28:44.175 { 00:28:44.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:44.175 "dma_device_type": 2 00:28:44.175 } 00:28:44.175 ], 00:28:44.175 "driver_specific": {} 00:28:44.175 } 00:28:44.175 ] 00:28:44.175 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:28:44.175 11:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:28:44.175 11:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:28:44.175 11:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:44.433 BaseBdev3 00:28:44.433 11:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:28:44.433 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 
00:28:44.433 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:44.433 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:28:44.433 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:44.433 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:44.433 11:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:44.433 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:44.698 [ 00:28:44.698 { 00:28:44.698 "name": "BaseBdev3", 00:28:44.698 "aliases": [ 00:28:44.698 "76d228d6-d309-4c29-96bb-d949ebd0c75f" 00:28:44.698 ], 00:28:44.698 "product_name": "Malloc disk", 00:28:44.698 "block_size": 512, 00:28:44.698 "num_blocks": 65536, 00:28:44.698 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:44.698 "assigned_rate_limits": { 00:28:44.698 "rw_ios_per_sec": 0, 00:28:44.698 "rw_mbytes_per_sec": 0, 00:28:44.698 "r_mbytes_per_sec": 0, 00:28:44.698 "w_mbytes_per_sec": 0 00:28:44.698 }, 00:28:44.698 "claimed": false, 00:28:44.698 "zoned": false, 00:28:44.698 "supported_io_types": { 00:28:44.698 "read": true, 00:28:44.698 "write": true, 00:28:44.698 "unmap": true, 00:28:44.698 "write_zeroes": true, 00:28:44.698 "flush": true, 00:28:44.698 "reset": true, 00:28:44.698 "compare": false, 00:28:44.698 "compare_and_write": false, 00:28:44.698 "abort": true, 00:28:44.698 "nvme_admin": false, 00:28:44.698 "nvme_io": false 00:28:44.698 }, 00:28:44.698 "memory_domains": [ 00:28:44.698 { 00:28:44.698 "dma_device_id": "system", 00:28:44.698 "dma_device_type": 1 00:28:44.698 }, 00:28:44.698 { 00:28:44.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:44.698 "dma_device_type": 2 00:28:44.698 } 00:28:44.698 ], 00:28:44.698 "driver_specific": {} 00:28:44.698 } 00:28:44.698 ] 00:28:44.698 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:28:44.698 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:28:44.698 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:28:44.698 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:44.956 [2024-05-15 11:22:03.484227] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:44.956 [2024-05-15 11:22:03.484326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:44.956 [2024-05-15 11:22:03.484374] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:44.956 [2024-05-15 11:22:03.485733] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local expected_state=configuring 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.956 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:45.214 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:45.214 "name": "Existed_Raid", 00:28:45.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.214 "strip_size_kb": 0, 00:28:45.214 "state": "configuring", 00:28:45.214 "raid_level": "raid1", 00:28:45.214 "superblock": false, 00:28:45.214 "num_base_bdevs": 3, 00:28:45.214 "num_base_bdevs_discovered": 2, 00:28:45.214 "num_base_bdevs_operational": 3, 00:28:45.214 "base_bdevs_list": [ 00:28:45.214 { 00:28:45.214 "name": "BaseBdev1", 00:28:45.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.214 "is_configured": false, 00:28:45.214 "data_offset": 0, 00:28:45.214 "data_size": 0 00:28:45.214 }, 00:28:45.214 { 00:28:45.214 "name": "BaseBdev2", 00:28:45.214 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:45.214 "is_configured": true, 00:28:45.214 "data_offset": 0, 00:28:45.214 "data_size": 65536 00:28:45.214 }, 00:28:45.214 { 00:28:45.214 "name": "BaseBdev3", 00:28:45.214 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:45.214 "is_configured": true, 00:28:45.214 "data_offset": 0, 00:28:45.214 "data_size": 65536 00:28:45.214 } 00:28:45.214 ] 00:28:45.214 }' 00:28:45.214 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:45.214 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.147 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:46.147 [2024-05-15 11:22:04.680397] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:46.147 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:46.147 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:46.147 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:46.147 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:46.147 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:46.147 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:28:46.147 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:46.148 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:46.148 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:46.148 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:46.148 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.148 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:46.404 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:46.404 "name": "Existed_Raid", 00:28:46.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:46.404 "strip_size_kb": 0, 00:28:46.404 "state": "configuring", 00:28:46.404 "raid_level": "raid1", 00:28:46.404 "superblock": false, 00:28:46.404 "num_base_bdevs": 3, 00:28:46.404 "num_base_bdevs_discovered": 1, 00:28:46.404 "num_base_bdevs_operational": 3, 00:28:46.404 "base_bdevs_list": [ 00:28:46.404 { 00:28:46.404 "name": "BaseBdev1", 00:28:46.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:46.404 "is_configured": false, 00:28:46.404 "data_offset": 0, 00:28:46.404 "data_size": 0 00:28:46.404 }, 00:28:46.404 { 00:28:46.404 "name": null, 00:28:46.404 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:46.404 "is_configured": false, 00:28:46.404 "data_offset": 0, 00:28:46.404 "data_size": 65536 00:28:46.404 }, 00:28:46.404 { 00:28:46.404 "name": "BaseBdev3", 00:28:46.404 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:46.404 "is_configured": true, 00:28:46.404 "data_offset": 0, 00:28:46.404 "data_size": 65536 00:28:46.404 } 00:28:46.404 ] 00:28:46.404 }' 00:28:46.404 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:46.404 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.337 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.337 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:47.337 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:28:47.337 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:47.595 [2024-05-15 11:22:06.100478] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:47.595 BaseBdev1 00:28:47.595 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:28:47.595 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:28:47.595 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:47.595 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:28:47.595 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:47.595 11:22:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:47.595 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:47.878 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:48.136 [ 00:28:48.136 { 00:28:48.136 "name": "BaseBdev1", 00:28:48.136 "aliases": [ 00:28:48.136 "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6" 00:28:48.136 ], 00:28:48.136 "product_name": "Malloc disk", 00:28:48.136 "block_size": 512, 00:28:48.136 "num_blocks": 65536, 00:28:48.136 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:48.136 "assigned_rate_limits": { 00:28:48.136 "rw_ios_per_sec": 0, 00:28:48.136 "rw_mbytes_per_sec": 0, 00:28:48.136 "r_mbytes_per_sec": 0, 00:28:48.136 "w_mbytes_per_sec": 0 00:28:48.136 }, 00:28:48.136 "claimed": true, 00:28:48.136 "claim_type": "exclusive_write", 00:28:48.136 "zoned": false, 00:28:48.136 "supported_io_types": { 00:28:48.136 "read": true, 00:28:48.136 "write": true, 00:28:48.136 "unmap": true, 00:28:48.136 "write_zeroes": true, 00:28:48.136 "flush": true, 00:28:48.136 "reset": true, 00:28:48.136 "compare": false, 00:28:48.136 "compare_and_write": false, 00:28:48.136 "abort": true, 00:28:48.136 "nvme_admin": false, 00:28:48.136 "nvme_io": false 00:28:48.136 }, 00:28:48.136 "memory_domains": [ 00:28:48.136 { 00:28:48.136 "dma_device_id": "system", 00:28:48.136 "dma_device_type": 1 00:28:48.136 }, 00:28:48.136 { 00:28:48.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:48.136 "dma_device_type": 2 00:28:48.136 } 00:28:48.136 ], 00:28:48.136 "driver_specific": {} 00:28:48.136 } 00:28:48.136 ] 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.136 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:48.393 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:48.394 "name": "Existed_Raid", 00:28:48.394 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:48.394 "strip_size_kb": 0, 00:28:48.394 "state": "configuring", 00:28:48.394 "raid_level": "raid1", 00:28:48.394 "superblock": false, 00:28:48.394 "num_base_bdevs": 3, 00:28:48.394 "num_base_bdevs_discovered": 2, 00:28:48.394 "num_base_bdevs_operational": 3, 00:28:48.394 "base_bdevs_list": [ 00:28:48.394 { 00:28:48.394 "name": "BaseBdev1", 00:28:48.394 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:48.394 "is_configured": true, 00:28:48.394 "data_offset": 0, 00:28:48.394 "data_size": 65536 00:28:48.394 }, 00:28:48.394 { 00:28:48.394 "name": null, 00:28:48.394 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:48.394 "is_configured": false, 00:28:48.394 "data_offset": 0, 00:28:48.394 "data_size": 65536 00:28:48.394 }, 00:28:48.394 { 00:28:48.394 "name": "BaseBdev3", 00:28:48.394 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:48.394 "is_configured": true, 00:28:48.394 "data_offset": 0, 00:28:48.394 "data_size": 65536 00:28:48.394 } 00:28:48.394 ] 00:28:48.394 }' 00:28:48.394 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:48.394 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.959 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.959 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:49.216 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:49.216 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:49.475 [2024-05-15 11:22:07.876799] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:49.475 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.475 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:49.475 
"name": "Existed_Raid", 00:28:49.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.475 "strip_size_kb": 0, 00:28:49.475 "state": "configuring", 00:28:49.475 "raid_level": "raid1", 00:28:49.475 "superblock": false, 00:28:49.475 "num_base_bdevs": 3, 00:28:49.475 "num_base_bdevs_discovered": 1, 00:28:49.475 "num_base_bdevs_operational": 3, 00:28:49.475 "base_bdevs_list": [ 00:28:49.475 { 00:28:49.475 "name": "BaseBdev1", 00:28:49.475 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:49.475 "is_configured": true, 00:28:49.475 "data_offset": 0, 00:28:49.475 "data_size": 65536 00:28:49.475 }, 00:28:49.475 { 00:28:49.475 "name": null, 00:28:49.475 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:49.475 "is_configured": false, 00:28:49.475 "data_offset": 0, 00:28:49.475 "data_size": 65536 00:28:49.475 }, 00:28:49.475 { 00:28:49.475 "name": null, 00:28:49.475 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:49.475 "is_configured": false, 00:28:49.475 "data_offset": 0, 00:28:49.475 "data_size": 65536 00:28:49.475 } 00:28:49.475 ] 00:28:49.475 }' 00:28:49.475 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:49.475 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:50.407 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.407 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:50.665 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:28:50.665 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:50.925 [2024-05-15 11:22:09.316990] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:50.925 11:22:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:50.925 "name": "Existed_Raid", 00:28:50.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.925 "strip_size_kb": 0, 00:28:50.925 "state": "configuring", 00:28:50.925 "raid_level": "raid1", 00:28:50.925 "superblock": false, 00:28:50.925 "num_base_bdevs": 3, 00:28:50.925 "num_base_bdevs_discovered": 2, 00:28:50.925 "num_base_bdevs_operational": 3, 00:28:50.925 "base_bdevs_list": [ 00:28:50.925 { 00:28:50.925 "name": "BaseBdev1", 00:28:50.925 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:50.925 "is_configured": true, 00:28:50.925 "data_offset": 0, 00:28:50.925 "data_size": 65536 00:28:50.925 }, 00:28:50.925 { 00:28:50.925 "name": null, 00:28:50.925 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:50.925 "is_configured": false, 00:28:50.925 "data_offset": 0, 00:28:50.925 "data_size": 65536 00:28:50.925 }, 00:28:50.925 { 00:28:50.925 "name": "BaseBdev3", 00:28:50.925 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:50.925 "is_configured": true, 00:28:50.925 "data_offset": 0, 00:28:50.925 "data_size": 65536 00:28:50.925 } 00:28:50.925 ] 00:28:50.925 }' 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:50.925 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.860 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.860 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:51.860 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:28:51.860 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:52.118 [2024-05-15 11:22:10.661258] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:52.377 "name": "Existed_Raid", 00:28:52.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:52.377 "strip_size_kb": 0, 00:28:52.377 "state": "configuring", 00:28:52.377 "raid_level": "raid1", 00:28:52.377 "superblock": false, 00:28:52.377 "num_base_bdevs": 3, 00:28:52.377 "num_base_bdevs_discovered": 1, 00:28:52.377 "num_base_bdevs_operational": 3, 00:28:52.377 "base_bdevs_list": [ 00:28:52.377 { 00:28:52.377 "name": null, 00:28:52.377 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:52.377 "is_configured": false, 00:28:52.377 "data_offset": 0, 00:28:52.377 "data_size": 65536 00:28:52.377 }, 00:28:52.377 { 00:28:52.377 "name": null, 00:28:52.377 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:52.377 "is_configured": false, 00:28:52.377 "data_offset": 0, 00:28:52.377 "data_size": 65536 00:28:52.377 }, 00:28:52.377 { 00:28:52.377 "name": "BaseBdev3", 00:28:52.377 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:52.377 "is_configured": true, 00:28:52.377 "data_offset": 0, 00:28:52.377 "data_size": 65536 00:28:52.377 } 00:28:52.377 ] 00:28:52.377 }' 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:52.377 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:53.311 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.311 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:53.311 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:28:53.311 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:53.570 [2024-05-15 11:22:12.019669] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:53.570 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.570 11:22:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:53.829 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:53.829 "name": "Existed_Raid", 00:28:53.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.829 "strip_size_kb": 0, 00:28:53.829 "state": "configuring", 00:28:53.829 "raid_level": "raid1", 00:28:53.829 "superblock": false, 00:28:53.829 "num_base_bdevs": 3, 00:28:53.829 "num_base_bdevs_discovered": 2, 00:28:53.829 "num_base_bdevs_operational": 3, 00:28:53.829 "base_bdevs_list": [ 00:28:53.829 { 00:28:53.829 "name": null, 00:28:53.829 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:53.829 "is_configured": false, 00:28:53.829 "data_offset": 0, 00:28:53.829 "data_size": 65536 00:28:53.829 }, 00:28:53.829 { 00:28:53.829 "name": "BaseBdev2", 00:28:53.829 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:53.829 "is_configured": true, 00:28:53.829 "data_offset": 0, 00:28:53.829 "data_size": 65536 00:28:53.829 }, 00:28:53.829 { 00:28:53.829 "name": "BaseBdev3", 00:28:53.829 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:53.829 "is_configured": true, 00:28:53.829 "data_offset": 0, 00:28:53.829 "data_size": 65536 00:28:53.829 } 00:28:53.829 ] 00:28:53.829 }' 00:28:53.829 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:53.829 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:54.762 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.762 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:54.762 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:28:54.762 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.762 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:55.020 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6 00:28:55.278 [2024-05-15 11:22:13.871075] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:55.279 [2024-05-15 11:22:13.871134] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:28:55.279 [2024-05-15 11:22:13.871155] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:55.279 [2024-05-15 11:22:13.871271] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:28:55.279 [2024-05-15 11:22:13.871515] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:28:55.279 [2024-05-15 11:22:13.871532] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:28:55.279 [2024-05-15 11:22:13.871724] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:55.279 NewBaseBdev 00:28:55.279 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:28:55.279 11:22:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:28:55.279 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:55.279 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:28:55.279 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:55.279 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:55.279 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:55.536 11:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:55.794 [ 00:28:55.794 { 00:28:55.794 "name": "NewBaseBdev", 00:28:55.794 "aliases": [ 00:28:55.794 "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6" 00:28:55.794 ], 00:28:55.794 "product_name": "Malloc disk", 00:28:55.794 "block_size": 512, 00:28:55.794 "num_blocks": 65536, 00:28:55.794 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:55.794 "assigned_rate_limits": { 00:28:55.794 "rw_ios_per_sec": 0, 00:28:55.794 "rw_mbytes_per_sec": 0, 00:28:55.794 "r_mbytes_per_sec": 0, 00:28:55.794 "w_mbytes_per_sec": 0 00:28:55.794 }, 00:28:55.794 "claimed": true, 00:28:55.794 "claim_type": "exclusive_write", 00:28:55.794 "zoned": false, 00:28:55.794 "supported_io_types": { 00:28:55.794 "read": true, 00:28:55.794 "write": true, 00:28:55.794 "unmap": true, 00:28:55.794 "write_zeroes": true, 00:28:55.794 "flush": true, 00:28:55.794 "reset": true, 00:28:55.794 "compare": false, 00:28:55.794 "compare_and_write": false, 00:28:55.794 "abort": true, 00:28:55.794 "nvme_admin": false, 00:28:55.794 "nvme_io": false 00:28:55.794 }, 00:28:55.794 "memory_domains": [ 00:28:55.794 { 00:28:55.794 "dma_device_id": "system", 00:28:55.794 "dma_device_type": 1 00:28:55.794 }, 00:28:55.794 { 00:28:55.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:55.794 "dma_device_type": 2 00:28:55.794 } 00:28:55.794 ], 00:28:55.794 "driver_specific": {} 00:28:55.794 } 00:28:55.794 ] 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 
00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.794 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:56.053 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:56.053 "name": "Existed_Raid", 00:28:56.053 "uuid": "013f3b48-e3b9-4c26-bf8b-0f7bf1693f16", 00:28:56.053 "strip_size_kb": 0, 00:28:56.053 "state": "online", 00:28:56.053 "raid_level": "raid1", 00:28:56.053 "superblock": false, 00:28:56.053 "num_base_bdevs": 3, 00:28:56.053 "num_base_bdevs_discovered": 3, 00:28:56.053 "num_base_bdevs_operational": 3, 00:28:56.053 "base_bdevs_list": [ 00:28:56.053 { 00:28:56.053 "name": "NewBaseBdev", 00:28:56.053 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:56.053 "is_configured": true, 00:28:56.053 "data_offset": 0, 00:28:56.053 "data_size": 65536 00:28:56.053 }, 00:28:56.053 { 00:28:56.053 "name": "BaseBdev2", 00:28:56.053 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:56.053 "is_configured": true, 00:28:56.053 "data_offset": 0, 00:28:56.053 "data_size": 65536 00:28:56.053 }, 00:28:56.053 { 00:28:56.053 "name": "BaseBdev3", 00:28:56.053 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:56.053 "is_configured": true, 00:28:56.053 "data_offset": 0, 00:28:56.053 "data_size": 65536 00:28:56.053 } 00:28:56.053 ] 00:28:56.053 }' 00:28:56.053 11:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:56.053 11:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:56.620 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:28:56.620 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:28:56.620 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:28:56.620 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:28:56.620 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:28:56.620 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:28:56.620 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:28:56.620 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:56.882 [2024-05-15 11:22:15.471550] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:56.882 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:28:56.882 "name": "Existed_Raid", 00:28:56.882 "aliases": [ 00:28:56.882 "013f3b48-e3b9-4c26-bf8b-0f7bf1693f16" 00:28:56.882 ], 00:28:56.882 "product_name": "Raid Volume", 00:28:56.882 "block_size": 512, 00:28:56.882 "num_blocks": 65536, 00:28:56.882 "uuid": "013f3b48-e3b9-4c26-bf8b-0f7bf1693f16", 00:28:56.882 "assigned_rate_limits": { 00:28:56.882 "rw_ios_per_sec": 0, 00:28:56.882 "rw_mbytes_per_sec": 0, 00:28:56.882 "r_mbytes_per_sec": 0, 00:28:56.882 "w_mbytes_per_sec": 0 00:28:56.882 }, 00:28:56.882 "claimed": false, 00:28:56.882 "zoned": false, 00:28:56.882 "supported_io_types": { 00:28:56.882 "read": true, 00:28:56.882 "write": 
true, 00:28:56.882 "unmap": false, 00:28:56.882 "write_zeroes": true, 00:28:56.882 "flush": false, 00:28:56.882 "reset": true, 00:28:56.882 "compare": false, 00:28:56.882 "compare_and_write": false, 00:28:56.882 "abort": false, 00:28:56.882 "nvme_admin": false, 00:28:56.882 "nvme_io": false 00:28:56.882 }, 00:28:56.882 "memory_domains": [ 00:28:56.882 { 00:28:56.882 "dma_device_id": "system", 00:28:56.882 "dma_device_type": 1 00:28:56.882 }, 00:28:56.882 { 00:28:56.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.882 "dma_device_type": 2 00:28:56.882 }, 00:28:56.882 { 00:28:56.882 "dma_device_id": "system", 00:28:56.882 "dma_device_type": 1 00:28:56.882 }, 00:28:56.882 { 00:28:56.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.882 "dma_device_type": 2 00:28:56.882 }, 00:28:56.882 { 00:28:56.882 "dma_device_id": "system", 00:28:56.882 "dma_device_type": 1 00:28:56.882 }, 00:28:56.882 { 00:28:56.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.882 "dma_device_type": 2 00:28:56.882 } 00:28:56.882 ], 00:28:56.882 "driver_specific": { 00:28:56.882 "raid": { 00:28:56.882 "uuid": "013f3b48-e3b9-4c26-bf8b-0f7bf1693f16", 00:28:56.882 "strip_size_kb": 0, 00:28:56.882 "state": "online", 00:28:56.882 "raid_level": "raid1", 00:28:56.882 "superblock": false, 00:28:56.882 "num_base_bdevs": 3, 00:28:56.882 "num_base_bdevs_discovered": 3, 00:28:56.882 "num_base_bdevs_operational": 3, 00:28:56.882 "base_bdevs_list": [ 00:28:56.882 { 00:28:56.882 "name": "NewBaseBdev", 00:28:56.882 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:56.882 "is_configured": true, 00:28:56.882 "data_offset": 0, 00:28:56.882 "data_size": 65536 00:28:56.882 }, 00:28:56.882 { 00:28:56.882 "name": "BaseBdev2", 00:28:56.882 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:56.882 "is_configured": true, 00:28:56.882 "data_offset": 0, 00:28:56.882 "data_size": 65536 00:28:56.882 }, 00:28:56.882 { 00:28:56.882 "name": "BaseBdev3", 00:28:56.882 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:56.882 "is_configured": true, 00:28:56.883 "data_offset": 0, 00:28:56.883 "data_size": 65536 00:28:56.883 } 00:28:56.883 ] 00:28:56.883 } 00:28:56.883 } 00:28:56.883 }' 00:28:56.883 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:57.140 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:28:57.140 BaseBdev2 00:28:57.140 BaseBdev3' 00:28:57.140 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:57.140 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:57.140 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:57.399 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:57.399 "name": "NewBaseBdev", 00:28:57.399 "aliases": [ 00:28:57.399 "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6" 00:28:57.399 ], 00:28:57.399 "product_name": "Malloc disk", 00:28:57.399 "block_size": 512, 00:28:57.399 "num_blocks": 65536, 00:28:57.399 "uuid": "d19b84b4-45a2-4fd7-8d8c-2cfb7f22d6f6", 00:28:57.399 "assigned_rate_limits": { 00:28:57.399 "rw_ios_per_sec": 0, 00:28:57.399 "rw_mbytes_per_sec": 0, 00:28:57.399 "r_mbytes_per_sec": 0, 00:28:57.399 "w_mbytes_per_sec": 0 00:28:57.399 }, 00:28:57.399 "claimed": true, 
00:28:57.399 "claim_type": "exclusive_write", 00:28:57.399 "zoned": false, 00:28:57.399 "supported_io_types": { 00:28:57.399 "read": true, 00:28:57.399 "write": true, 00:28:57.399 "unmap": true, 00:28:57.399 "write_zeroes": true, 00:28:57.399 "flush": true, 00:28:57.399 "reset": true, 00:28:57.399 "compare": false, 00:28:57.399 "compare_and_write": false, 00:28:57.399 "abort": true, 00:28:57.399 "nvme_admin": false, 00:28:57.399 "nvme_io": false 00:28:57.399 }, 00:28:57.399 "memory_domains": [ 00:28:57.399 { 00:28:57.399 "dma_device_id": "system", 00:28:57.399 "dma_device_type": 1 00:28:57.399 }, 00:28:57.399 { 00:28:57.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:57.399 "dma_device_type": 2 00:28:57.399 } 00:28:57.399 ], 00:28:57.399 "driver_specific": {} 00:28:57.399 }' 00:28:57.399 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:57.399 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:57.399 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:57.399 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:57.399 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:57.657 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:57.657 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:57.657 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:57.657 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:57.657 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:57.657 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:57.657 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:57.657 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:57.658 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:57.658 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:57.915 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:57.915 "name": "BaseBdev2", 00:28:57.915 "aliases": [ 00:28:57.916 "d9d8a409-5c76-4410-9cc6-21ea6f8976f5" 00:28:57.916 ], 00:28:57.916 "product_name": "Malloc disk", 00:28:57.916 "block_size": 512, 00:28:57.916 "num_blocks": 65536, 00:28:57.916 "uuid": "d9d8a409-5c76-4410-9cc6-21ea6f8976f5", 00:28:57.916 "assigned_rate_limits": { 00:28:57.916 "rw_ios_per_sec": 0, 00:28:57.916 "rw_mbytes_per_sec": 0, 00:28:57.916 "r_mbytes_per_sec": 0, 00:28:57.916 "w_mbytes_per_sec": 0 00:28:57.916 }, 00:28:57.916 "claimed": true, 00:28:57.916 "claim_type": "exclusive_write", 00:28:57.916 "zoned": false, 00:28:57.916 "supported_io_types": { 00:28:57.916 "read": true, 00:28:57.916 "write": true, 00:28:57.916 "unmap": true, 00:28:57.916 "write_zeroes": true, 00:28:57.916 "flush": true, 00:28:57.916 "reset": true, 00:28:57.916 "compare": false, 00:28:57.916 "compare_and_write": false, 00:28:57.916 "abort": true, 00:28:57.916 "nvme_admin": false, 00:28:57.916 "nvme_io": false 00:28:57.916 }, 00:28:57.916 "memory_domains": [ 
00:28:57.916 { 00:28:57.916 "dma_device_id": "system", 00:28:57.916 "dma_device_type": 1 00:28:57.916 }, 00:28:57.916 { 00:28:57.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:57.916 "dma_device_type": 2 00:28:57.916 } 00:28:57.916 ], 00:28:57.916 "driver_specific": {} 00:28:57.916 }' 00:28:57.916 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:58.174 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:58.174 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:58.174 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:58.174 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:58.174 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:58.174 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:58.432 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:58.432 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:58.432 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:58.432 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:58.432 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:58.432 11:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:28:58.432 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:28:58.432 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:58.690 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:28:58.690 "name": "BaseBdev3", 00:28:58.690 "aliases": [ 00:28:58.690 "76d228d6-d309-4c29-96bb-d949ebd0c75f" 00:28:58.690 ], 00:28:58.690 "product_name": "Malloc disk", 00:28:58.690 "block_size": 512, 00:28:58.690 "num_blocks": 65536, 00:28:58.690 "uuid": "76d228d6-d309-4c29-96bb-d949ebd0c75f", 00:28:58.690 "assigned_rate_limits": { 00:28:58.690 "rw_ios_per_sec": 0, 00:28:58.690 "rw_mbytes_per_sec": 0, 00:28:58.690 "r_mbytes_per_sec": 0, 00:28:58.690 "w_mbytes_per_sec": 0 00:28:58.690 }, 00:28:58.690 "claimed": true, 00:28:58.690 "claim_type": "exclusive_write", 00:28:58.690 "zoned": false, 00:28:58.690 "supported_io_types": { 00:28:58.690 "read": true, 00:28:58.690 "write": true, 00:28:58.690 "unmap": true, 00:28:58.690 "write_zeroes": true, 00:28:58.690 "flush": true, 00:28:58.690 "reset": true, 00:28:58.690 "compare": false, 00:28:58.690 "compare_and_write": false, 00:28:58.690 "abort": true, 00:28:58.690 "nvme_admin": false, 00:28:58.690 "nvme_io": false 00:28:58.690 }, 00:28:58.690 "memory_domains": [ 00:28:58.690 { 00:28:58.690 "dma_device_id": "system", 00:28:58.690 "dma_device_type": 1 00:28:58.690 }, 00:28:58.690 { 00:28:58.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.690 "dma_device_type": 2 00:28:58.690 } 00:28:58.690 ], 00:28:58.690 "driver_specific": {} 00:28:58.690 }' 00:28:58.690 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:28:58.948 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 
00:28:58.948 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:28:58.948 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:58.948 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:28:58.948 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:58.948 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:58.948 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:28:59.206 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:59.206 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:59.206 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:28:59.206 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:28:59.206 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:59.464 [2024-05-15 11:22:17.927640] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:59.464 [2024-05-15 11:22:17.927687] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:59.464 [2024-05-15 11:22:17.927755] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:59.464 [2024-05-15 11:22:17.928208] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:59.464 [2024-05-15 11:22:17.928232] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 61674 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 61674 ']' 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 61674 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61674 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:59.464 killing process with pid 61674 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61674' 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 61674 00:28:59.464 11:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 61674 00:28:59.464 [2024-05-15 11:22:17.961121] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:59.723 [2024-05-15 11:22:18.216623] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:29:01.105 00:29:01.105 real 0m31.303s 00:29:01.105 
user 0m58.901s 00:29:01.105 sys 0m3.061s 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.105 ************************************ 00:29:01.105 END TEST raid_state_function_test 00:29:01.105 ************************************ 00:29:01.105 11:22:19 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:29:01.105 11:22:19 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:29:01.105 11:22:19 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:01.105 11:22:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:01.105 ************************************ 00:29:01.105 START TEST raid_state_function_test_sb 00:29:01.105 ************************************ 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' 
raid1 ']' 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=62680 00:29:01.105 Process raid pid: 62680 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 62680' 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 62680 /var/tmp/spdk-raid.sock 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 62680 ']' 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:01.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:01.105 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.105 [2024-05-15 11:22:19.645398] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:29:01.105 [2024-05-15 11:22:19.645611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.364 [2024-05-15 11:22:19.811399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.636 [2024-05-15 11:22:20.060296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.636 [2024-05-15 11:22:20.260194] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:01.895 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:01.895 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:29:01.895 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:29:02.153 [2024-05-15 11:22:20.711401] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:02.153 [2024-05-15 11:22:20.711511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:02.153 [2024-05-15 11:22:20.711529] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:02.153 [2024-05-15 11:22:20.711550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:02.153 [2024-05-15 11:22:20.711559] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:02.153 [2024-05-15 11:22:20.711606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.153 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:02.420 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:02.420 "name": "Existed_Raid", 00:29:02.420 "uuid": "e507297a-b6f3-448f-a2f1-eccd2b865ebe", 
00:29:02.420 "strip_size_kb": 0, 00:29:02.420 "state": "configuring", 00:29:02.420 "raid_level": "raid1", 00:29:02.420 "superblock": true, 00:29:02.420 "num_base_bdevs": 3, 00:29:02.420 "num_base_bdevs_discovered": 0, 00:29:02.420 "num_base_bdevs_operational": 3, 00:29:02.420 "base_bdevs_list": [ 00:29:02.420 { 00:29:02.420 "name": "BaseBdev1", 00:29:02.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.420 "is_configured": false, 00:29:02.420 "data_offset": 0, 00:29:02.420 "data_size": 0 00:29:02.420 }, 00:29:02.420 { 00:29:02.420 "name": "BaseBdev2", 00:29:02.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.420 "is_configured": false, 00:29:02.420 "data_offset": 0, 00:29:02.420 "data_size": 0 00:29:02.420 }, 00:29:02.420 { 00:29:02.420 "name": "BaseBdev3", 00:29:02.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.420 "is_configured": false, 00:29:02.420 "data_offset": 0, 00:29:02.420 "data_size": 0 00:29:02.420 } 00:29:02.420 ] 00:29:02.420 }' 00:29:02.420 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:02.420 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.986 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:03.244 [2024-05-15 11:22:21.819388] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:03.244 [2024-05-15 11:22:21.819473] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:29:03.244 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:29:03.503 [2024-05-15 11:22:22.015428] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:03.503 [2024-05-15 11:22:22.015515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:03.503 [2024-05-15 11:22:22.015531] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:03.503 [2024-05-15 11:22:22.015563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:03.503 [2024-05-15 11:22:22.015573] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:03.503 [2024-05-15 11:22:22.015599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:03.503 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:29:03.762 BaseBdev1 00:29:03.762 [2024-05-15 11:22:22.352106] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:03.762 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:29:03.762 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:29:03.762 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:03.762 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:29:03.762 11:22:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:29:03.762 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:03.762 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:04.021 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:04.280 [ 00:29:04.280 { 00:29:04.280 "name": "BaseBdev1", 00:29:04.280 "aliases": [ 00:29:04.280 "92ccf663-2b1e-4839-b14f-0402e4772c44" 00:29:04.280 ], 00:29:04.280 "product_name": "Malloc disk", 00:29:04.280 "block_size": 512, 00:29:04.280 "num_blocks": 65536, 00:29:04.280 "uuid": "92ccf663-2b1e-4839-b14f-0402e4772c44", 00:29:04.280 "assigned_rate_limits": { 00:29:04.280 "rw_ios_per_sec": 0, 00:29:04.280 "rw_mbytes_per_sec": 0, 00:29:04.280 "r_mbytes_per_sec": 0, 00:29:04.280 "w_mbytes_per_sec": 0 00:29:04.280 }, 00:29:04.280 "claimed": true, 00:29:04.280 "claim_type": "exclusive_write", 00:29:04.280 "zoned": false, 00:29:04.280 "supported_io_types": { 00:29:04.280 "read": true, 00:29:04.280 "write": true, 00:29:04.280 "unmap": true, 00:29:04.280 "write_zeroes": true, 00:29:04.280 "flush": true, 00:29:04.280 "reset": true, 00:29:04.280 "compare": false, 00:29:04.280 "compare_and_write": false, 00:29:04.280 "abort": true, 00:29:04.280 "nvme_admin": false, 00:29:04.280 "nvme_io": false 00:29:04.280 }, 00:29:04.280 "memory_domains": [ 00:29:04.280 { 00:29:04.280 "dma_device_id": "system", 00:29:04.280 "dma_device_type": 1 00:29:04.280 }, 00:29:04.280 { 00:29:04.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:04.280 "dma_device_type": 2 00:29:04.280 } 00:29:04.280 ], 00:29:04.280 "driver_specific": {} 00:29:04.280 } 00:29:04.280 ] 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.280 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:29:04.538 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:04.538 "name": "Existed_Raid", 00:29:04.538 "uuid": "eb4922eb-0149-4b65-9432-f5e6a5bc8d21", 00:29:04.538 "strip_size_kb": 0, 00:29:04.538 "state": "configuring", 00:29:04.538 "raid_level": "raid1", 00:29:04.538 "superblock": true, 00:29:04.538 "num_base_bdevs": 3, 00:29:04.538 "num_base_bdevs_discovered": 1, 00:29:04.538 "num_base_bdevs_operational": 3, 00:29:04.538 "base_bdevs_list": [ 00:29:04.538 { 00:29:04.538 "name": "BaseBdev1", 00:29:04.538 "uuid": "92ccf663-2b1e-4839-b14f-0402e4772c44", 00:29:04.538 "is_configured": true, 00:29:04.538 "data_offset": 2048, 00:29:04.538 "data_size": 63488 00:29:04.538 }, 00:29:04.538 { 00:29:04.538 "name": "BaseBdev2", 00:29:04.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.538 "is_configured": false, 00:29:04.538 "data_offset": 0, 00:29:04.538 "data_size": 0 00:29:04.538 }, 00:29:04.538 { 00:29:04.538 "name": "BaseBdev3", 00:29:04.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.538 "is_configured": false, 00:29:04.538 "data_offset": 0, 00:29:04.538 "data_size": 0 00:29:04.538 } 00:29:04.538 ] 00:29:04.538 }' 00:29:04.538 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:04.538 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.471 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:05.471 [2024-05-15 11:22:23.964329] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:05.471 [2024-05-15 11:22:23.964391] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:29:05.471 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:29:05.730 [2024-05-15 11:22:24.152422] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:05.730 [2024-05-15 11:22:24.153991] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:05.730 [2024-05-15 11:22:24.154051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:05.730 [2024-05-15 11:22:24.154066] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:05.730 [2024-05-15 11:22:24.154093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:05.730 11:22:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:05.730 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.989 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:05.989 "name": "Existed_Raid", 00:29:05.989 "uuid": "5cb7dda8-204c-439c-9b1e-949b5b4f9021", 00:29:05.989 "strip_size_kb": 0, 00:29:05.989 "state": "configuring", 00:29:05.989 "raid_level": "raid1", 00:29:05.989 "superblock": true, 00:29:05.989 "num_base_bdevs": 3, 00:29:05.989 "num_base_bdevs_discovered": 1, 00:29:05.989 "num_base_bdevs_operational": 3, 00:29:05.989 "base_bdevs_list": [ 00:29:05.989 { 00:29:05.989 "name": "BaseBdev1", 00:29:05.989 "uuid": "92ccf663-2b1e-4839-b14f-0402e4772c44", 00:29:05.989 "is_configured": true, 00:29:05.989 "data_offset": 2048, 00:29:05.989 "data_size": 63488 00:29:05.989 }, 00:29:05.989 { 00:29:05.989 "name": "BaseBdev2", 00:29:05.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.989 "is_configured": false, 00:29:05.989 "data_offset": 0, 00:29:05.989 "data_size": 0 00:29:05.989 }, 00:29:05.989 { 00:29:05.989 "name": "BaseBdev3", 00:29:05.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.989 "is_configured": false, 00:29:05.989 "data_offset": 0, 00:29:05.989 "data_size": 0 00:29:05.989 } 00:29:05.989 ] 00:29:05.989 }' 00:29:05.989 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:05.989 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.557 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:29:06.816 [2024-05-15 11:22:25.368968] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:06.816 BaseBdev2 00:29:06.816 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:29:06.816 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:29:06.816 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:06.816 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:29:06.816 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:29:06.816 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:06.816 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:07.074 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:07.333 [ 00:29:07.333 { 00:29:07.333 "name": "BaseBdev2", 00:29:07.333 "aliases": [ 00:29:07.333 "af5d7e90-abbd-446e-ba83-3e8d98fb5188" 00:29:07.333 ], 00:29:07.333 "product_name": "Malloc disk", 00:29:07.333 "block_size": 512, 00:29:07.333 "num_blocks": 65536, 00:29:07.333 "uuid": "af5d7e90-abbd-446e-ba83-3e8d98fb5188", 00:29:07.333 "assigned_rate_limits": { 00:29:07.333 "rw_ios_per_sec": 0, 00:29:07.333 "rw_mbytes_per_sec": 0, 00:29:07.333 "r_mbytes_per_sec": 0, 00:29:07.333 "w_mbytes_per_sec": 0 00:29:07.333 }, 00:29:07.333 "claimed": true, 00:29:07.333 "claim_type": "exclusive_write", 00:29:07.333 "zoned": false, 00:29:07.333 "supported_io_types": { 00:29:07.333 "read": true, 00:29:07.333 "write": true, 00:29:07.333 "unmap": true, 00:29:07.333 "write_zeroes": true, 00:29:07.333 "flush": true, 00:29:07.333 "reset": true, 00:29:07.333 "compare": false, 00:29:07.333 "compare_and_write": false, 00:29:07.333 "abort": true, 00:29:07.333 "nvme_admin": false, 00:29:07.333 "nvme_io": false 00:29:07.333 }, 00:29:07.333 "memory_domains": [ 00:29:07.333 { 00:29:07.333 "dma_device_id": "system", 00:29:07.333 "dma_device_type": 1 00:29:07.333 }, 00:29:07.333 { 00:29:07.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.333 "dma_device_type": 2 00:29:07.333 } 00:29:07.333 ], 00:29:07.333 "driver_specific": {} 00:29:07.333 } 00:29:07.333 ] 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:07.591 11:22:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:07.591 "name": "Existed_Raid", 00:29:07.591 "uuid": "5cb7dda8-204c-439c-9b1e-949b5b4f9021", 00:29:07.591 "strip_size_kb": 0, 00:29:07.591 "state": "configuring", 00:29:07.591 "raid_level": "raid1", 00:29:07.591 "superblock": true, 00:29:07.591 "num_base_bdevs": 3, 00:29:07.591 "num_base_bdevs_discovered": 2, 00:29:07.591 "num_base_bdevs_operational": 3, 00:29:07.591 "base_bdevs_list": [ 00:29:07.591 { 00:29:07.591 "name": "BaseBdev1", 00:29:07.591 "uuid": "92ccf663-2b1e-4839-b14f-0402e4772c44", 00:29:07.591 "is_configured": true, 00:29:07.591 "data_offset": 2048, 00:29:07.591 "data_size": 63488 00:29:07.591 }, 00:29:07.591 { 00:29:07.591 "name": "BaseBdev2", 00:29:07.591 "uuid": "af5d7e90-abbd-446e-ba83-3e8d98fb5188", 00:29:07.591 "is_configured": true, 00:29:07.591 "data_offset": 2048, 00:29:07.591 "data_size": 63488 00:29:07.591 }, 00:29:07.591 { 00:29:07.591 "name": "BaseBdev3", 00:29:07.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.591 "is_configured": false, 00:29:07.591 "data_offset": 0, 00:29:07.591 "data_size": 0 00:29:07.591 } 00:29:07.591 ] 00:29:07.591 }' 00:29:07.591 11:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:07.591 11:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.525 11:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:29:08.525 [2024-05-15 11:22:27.068451] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:08.525 [2024-05-15 11:22:27.068696] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:29:08.525 [2024-05-15 11:22:27.068715] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:08.525 BaseBdev3 00:29:08.525 [2024-05-15 11:22:27.068820] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:29:08.525 [2024-05-15 11:22:27.069334] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:29:08.525 [2024-05-15 11:22:27.069351] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:29:08.525 [2024-05-15 11:22:27.069459] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:08.525 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:29:08.525 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:29:08.525 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:08.526 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:29:08.526 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:29:08.526 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:08.526 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:08.784 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:29:09.042 [ 00:29:09.042 { 00:29:09.042 "name": "BaseBdev3", 00:29:09.042 "aliases": [ 00:29:09.042 "470af117-5bbe-4bc3-a5ec-4a8fbda902d8" 00:29:09.042 ], 00:29:09.042 "product_name": "Malloc disk", 00:29:09.042 "block_size": 512, 00:29:09.042 "num_blocks": 65536, 00:29:09.042 "uuid": "470af117-5bbe-4bc3-a5ec-4a8fbda902d8", 00:29:09.042 "assigned_rate_limits": { 00:29:09.042 "rw_ios_per_sec": 0, 00:29:09.042 "rw_mbytes_per_sec": 0, 00:29:09.042 "r_mbytes_per_sec": 0, 00:29:09.042 "w_mbytes_per_sec": 0 00:29:09.042 }, 00:29:09.042 "claimed": true, 00:29:09.042 "claim_type": "exclusive_write", 00:29:09.042 "zoned": false, 00:29:09.042 "supported_io_types": { 00:29:09.042 "read": true, 00:29:09.042 "write": true, 00:29:09.042 "unmap": true, 00:29:09.042 "write_zeroes": true, 00:29:09.042 "flush": true, 00:29:09.042 "reset": true, 00:29:09.042 "compare": false, 00:29:09.042 "compare_and_write": false, 00:29:09.042 "abort": true, 00:29:09.042 "nvme_admin": false, 00:29:09.042 "nvme_io": false 00:29:09.042 }, 00:29:09.042 "memory_domains": [ 00:29:09.042 { 00:29:09.042 "dma_device_id": "system", 00:29:09.042 "dma_device_type": 1 00:29:09.042 }, 00:29:09.042 { 00:29:09.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.042 "dma_device_type": 2 00:29:09.042 } 00:29:09.042 ], 00:29:09.042 "driver_specific": {} 00:29:09.042 } 00:29:09.042 ] 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:09.042 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:09.043 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:09.043 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:09.043 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:09.043 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.043 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:09.301 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:09.301 "name": "Existed_Raid", 00:29:09.301 "uuid": "5cb7dda8-204c-439c-9b1e-949b5b4f9021", 00:29:09.301 "strip_size_kb": 0, 00:29:09.301 "state": "online", 00:29:09.301 "raid_level": "raid1", 00:29:09.301 "superblock": true, 00:29:09.301 
"num_base_bdevs": 3, 00:29:09.301 "num_base_bdevs_discovered": 3, 00:29:09.301 "num_base_bdevs_operational": 3, 00:29:09.301 "base_bdevs_list": [ 00:29:09.301 { 00:29:09.301 "name": "BaseBdev1", 00:29:09.301 "uuid": "92ccf663-2b1e-4839-b14f-0402e4772c44", 00:29:09.301 "is_configured": true, 00:29:09.301 "data_offset": 2048, 00:29:09.301 "data_size": 63488 00:29:09.301 }, 00:29:09.301 { 00:29:09.301 "name": "BaseBdev2", 00:29:09.301 "uuid": "af5d7e90-abbd-446e-ba83-3e8d98fb5188", 00:29:09.301 "is_configured": true, 00:29:09.301 "data_offset": 2048, 00:29:09.301 "data_size": 63488 00:29:09.301 }, 00:29:09.301 { 00:29:09.301 "name": "BaseBdev3", 00:29:09.301 "uuid": "470af117-5bbe-4bc3-a5ec-4a8fbda902d8", 00:29:09.301 "is_configured": true, 00:29:09.301 "data_offset": 2048, 00:29:09.301 "data_size": 63488 00:29:09.301 } 00:29:09.301 ] 00:29:09.301 }' 00:29:09.301 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:09.301 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.867 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:29:09.867 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:29:09.867 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:29:09.867 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:29:09.867 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:29:09.867 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:29:09.867 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:29:09.867 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:29:10.125 [2024-05-15 11:22:28.576904] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:10.125 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:29:10.125 "name": "Existed_Raid", 00:29:10.125 "aliases": [ 00:29:10.125 "5cb7dda8-204c-439c-9b1e-949b5b4f9021" 00:29:10.125 ], 00:29:10.125 "product_name": "Raid Volume", 00:29:10.125 "block_size": 512, 00:29:10.125 "num_blocks": 63488, 00:29:10.125 "uuid": "5cb7dda8-204c-439c-9b1e-949b5b4f9021", 00:29:10.125 "assigned_rate_limits": { 00:29:10.125 "rw_ios_per_sec": 0, 00:29:10.125 "rw_mbytes_per_sec": 0, 00:29:10.125 "r_mbytes_per_sec": 0, 00:29:10.125 "w_mbytes_per_sec": 0 00:29:10.125 }, 00:29:10.125 "claimed": false, 00:29:10.125 "zoned": false, 00:29:10.125 "supported_io_types": { 00:29:10.125 "read": true, 00:29:10.125 "write": true, 00:29:10.125 "unmap": false, 00:29:10.125 "write_zeroes": true, 00:29:10.125 "flush": false, 00:29:10.125 "reset": true, 00:29:10.125 "compare": false, 00:29:10.125 "compare_and_write": false, 00:29:10.125 "abort": false, 00:29:10.125 "nvme_admin": false, 00:29:10.125 "nvme_io": false 00:29:10.125 }, 00:29:10.125 "memory_domains": [ 00:29:10.125 { 00:29:10.125 "dma_device_id": "system", 00:29:10.125 "dma_device_type": 1 00:29:10.125 }, 00:29:10.125 { 00:29:10.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.125 "dma_device_type": 2 00:29:10.125 }, 00:29:10.125 { 00:29:10.125 "dma_device_id": "system", 
00:29:10.125 "dma_device_type": 1 00:29:10.125 }, 00:29:10.125 { 00:29:10.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.125 "dma_device_type": 2 00:29:10.125 }, 00:29:10.125 { 00:29:10.125 "dma_device_id": "system", 00:29:10.125 "dma_device_type": 1 00:29:10.125 }, 00:29:10.125 { 00:29:10.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.125 "dma_device_type": 2 00:29:10.125 } 00:29:10.125 ], 00:29:10.125 "driver_specific": { 00:29:10.125 "raid": { 00:29:10.125 "uuid": "5cb7dda8-204c-439c-9b1e-949b5b4f9021", 00:29:10.125 "strip_size_kb": 0, 00:29:10.125 "state": "online", 00:29:10.125 "raid_level": "raid1", 00:29:10.125 "superblock": true, 00:29:10.125 "num_base_bdevs": 3, 00:29:10.125 "num_base_bdevs_discovered": 3, 00:29:10.125 "num_base_bdevs_operational": 3, 00:29:10.125 "base_bdevs_list": [ 00:29:10.125 { 00:29:10.125 "name": "BaseBdev1", 00:29:10.125 "uuid": "92ccf663-2b1e-4839-b14f-0402e4772c44", 00:29:10.125 "is_configured": true, 00:29:10.125 "data_offset": 2048, 00:29:10.125 "data_size": 63488 00:29:10.125 }, 00:29:10.125 { 00:29:10.125 "name": "BaseBdev2", 00:29:10.125 "uuid": "af5d7e90-abbd-446e-ba83-3e8d98fb5188", 00:29:10.125 "is_configured": true, 00:29:10.125 "data_offset": 2048, 00:29:10.125 "data_size": 63488 00:29:10.125 }, 00:29:10.125 { 00:29:10.125 "name": "BaseBdev3", 00:29:10.125 "uuid": "470af117-5bbe-4bc3-a5ec-4a8fbda902d8", 00:29:10.125 "is_configured": true, 00:29:10.125 "data_offset": 2048, 00:29:10.125 "data_size": 63488 00:29:10.125 } 00:29:10.125 ] 00:29:10.125 } 00:29:10.125 } 00:29:10.125 }' 00:29:10.125 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:10.125 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:29:10.125 BaseBdev2 00:29:10.125 BaseBdev3' 00:29:10.125 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:10.125 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:29:10.125 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:10.383 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:10.383 "name": "BaseBdev1", 00:29:10.383 "aliases": [ 00:29:10.383 "92ccf663-2b1e-4839-b14f-0402e4772c44" 00:29:10.383 ], 00:29:10.383 "product_name": "Malloc disk", 00:29:10.383 "block_size": 512, 00:29:10.383 "num_blocks": 65536, 00:29:10.383 "uuid": "92ccf663-2b1e-4839-b14f-0402e4772c44", 00:29:10.383 "assigned_rate_limits": { 00:29:10.383 "rw_ios_per_sec": 0, 00:29:10.383 "rw_mbytes_per_sec": 0, 00:29:10.383 "r_mbytes_per_sec": 0, 00:29:10.383 "w_mbytes_per_sec": 0 00:29:10.383 }, 00:29:10.383 "claimed": true, 00:29:10.383 "claim_type": "exclusive_write", 00:29:10.383 "zoned": false, 00:29:10.383 "supported_io_types": { 00:29:10.383 "read": true, 00:29:10.383 "write": true, 00:29:10.383 "unmap": true, 00:29:10.384 "write_zeroes": true, 00:29:10.384 "flush": true, 00:29:10.384 "reset": true, 00:29:10.384 "compare": false, 00:29:10.384 "compare_and_write": false, 00:29:10.384 "abort": true, 00:29:10.384 "nvme_admin": false, 00:29:10.384 "nvme_io": false 00:29:10.384 }, 00:29:10.384 "memory_domains": [ 00:29:10.384 { 00:29:10.384 "dma_device_id": "system", 00:29:10.384 "dma_device_type": 1 00:29:10.384 }, 
00:29:10.384 { 00:29:10.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.384 "dma_device_type": 2 00:29:10.384 } 00:29:10.384 ], 00:29:10.384 "driver_specific": {} 00:29:10.384 }' 00:29:10.384 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:10.384 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:10.384 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:10.384 11:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:10.642 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:10.642 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:10.642 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:10.642 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:10.642 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:10.642 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:10.642 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:10.900 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:10.900 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:10.900 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:10.900 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:29:11.159 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:11.159 "name": "BaseBdev2", 00:29:11.159 "aliases": [ 00:29:11.159 "af5d7e90-abbd-446e-ba83-3e8d98fb5188" 00:29:11.159 ], 00:29:11.159 "product_name": "Malloc disk", 00:29:11.159 "block_size": 512, 00:29:11.159 "num_blocks": 65536, 00:29:11.159 "uuid": "af5d7e90-abbd-446e-ba83-3e8d98fb5188", 00:29:11.159 "assigned_rate_limits": { 00:29:11.159 "rw_ios_per_sec": 0, 00:29:11.159 "rw_mbytes_per_sec": 0, 00:29:11.159 "r_mbytes_per_sec": 0, 00:29:11.159 "w_mbytes_per_sec": 0 00:29:11.159 }, 00:29:11.159 "claimed": true, 00:29:11.159 "claim_type": "exclusive_write", 00:29:11.159 "zoned": false, 00:29:11.159 "supported_io_types": { 00:29:11.159 "read": true, 00:29:11.159 "write": true, 00:29:11.159 "unmap": true, 00:29:11.159 "write_zeroes": true, 00:29:11.159 "flush": true, 00:29:11.159 "reset": true, 00:29:11.159 "compare": false, 00:29:11.159 "compare_and_write": false, 00:29:11.159 "abort": true, 00:29:11.159 "nvme_admin": false, 00:29:11.159 "nvme_io": false 00:29:11.159 }, 00:29:11.159 "memory_domains": [ 00:29:11.159 { 00:29:11.159 "dma_device_id": "system", 00:29:11.159 "dma_device_type": 1 00:29:11.159 }, 00:29:11.159 { 00:29:11.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:11.159 "dma_device_type": 2 00:29:11.159 } 00:29:11.159 ], 00:29:11.159 "driver_specific": {} 00:29:11.159 }' 00:29:11.159 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:11.159 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:11.159 11:22:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:11.159 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:11.159 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:11.485 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:11.485 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:11.485 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:11.485 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:11.485 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:11.485 11:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:11.485 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:11.485 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:11.485 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:29:11.485 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:11.750 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:11.750 "name": "BaseBdev3", 00:29:11.750 "aliases": [ 00:29:11.750 "470af117-5bbe-4bc3-a5ec-4a8fbda902d8" 00:29:11.750 ], 00:29:11.750 "product_name": "Malloc disk", 00:29:11.750 "block_size": 512, 00:29:11.750 "num_blocks": 65536, 00:29:11.750 "uuid": "470af117-5bbe-4bc3-a5ec-4a8fbda902d8", 00:29:11.750 "assigned_rate_limits": { 00:29:11.750 "rw_ios_per_sec": 0, 00:29:11.750 "rw_mbytes_per_sec": 0, 00:29:11.750 "r_mbytes_per_sec": 0, 00:29:11.750 "w_mbytes_per_sec": 0 00:29:11.750 }, 00:29:11.750 "claimed": true, 00:29:11.750 "claim_type": "exclusive_write", 00:29:11.750 "zoned": false, 00:29:11.750 "supported_io_types": { 00:29:11.750 "read": true, 00:29:11.750 "write": true, 00:29:11.750 "unmap": true, 00:29:11.750 "write_zeroes": true, 00:29:11.750 "flush": true, 00:29:11.750 "reset": true, 00:29:11.750 "compare": false, 00:29:11.750 "compare_and_write": false, 00:29:11.750 "abort": true, 00:29:11.750 "nvme_admin": false, 00:29:11.750 "nvme_io": false 00:29:11.750 }, 00:29:11.750 "memory_domains": [ 00:29:11.750 { 00:29:11.750 "dma_device_id": "system", 00:29:11.750 "dma_device_type": 1 00:29:11.750 }, 00:29:11.750 { 00:29:11.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:11.750 "dma_device_type": 2 00:29:11.750 } 00:29:11.750 ], 00:29:11.750 "driver_specific": {} 00:29:11.750 }' 00:29:11.750 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:11.750 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:11.750 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:11.750 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:12.011 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:12.011 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:12.011 11:22:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:12.011 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:12.011 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:12.011 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:12.011 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:12.269 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:12.269 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:12.269 [2024-05-15 11:22:30.857167] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.528 11:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:12.528 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:12.528 "name": "Existed_Raid", 00:29:12.528 "uuid": "5cb7dda8-204c-439c-9b1e-949b5b4f9021", 00:29:12.528 "strip_size_kb": 0, 00:29:12.528 "state": "online", 00:29:12.528 "raid_level": "raid1", 00:29:12.528 "superblock": true, 00:29:12.528 "num_base_bdevs": 3, 00:29:12.528 "num_base_bdevs_discovered": 2, 00:29:12.528 "num_base_bdevs_operational": 2, 00:29:12.528 "base_bdevs_list": [ 00:29:12.528 { 00:29:12.528 "name": null, 00:29:12.528 "uuid": "00000000-0000-0000-0000-000000000000", 
00:29:12.528 "is_configured": false, 00:29:12.528 "data_offset": 2048, 00:29:12.528 "data_size": 63488 00:29:12.528 }, 00:29:12.528 { 00:29:12.528 "name": "BaseBdev2", 00:29:12.528 "uuid": "af5d7e90-abbd-446e-ba83-3e8d98fb5188", 00:29:12.528 "is_configured": true, 00:29:12.528 "data_offset": 2048, 00:29:12.528 "data_size": 63488 00:29:12.528 }, 00:29:12.528 { 00:29:12.528 "name": "BaseBdev3", 00:29:12.528 "uuid": "470af117-5bbe-4bc3-a5ec-4a8fbda902d8", 00:29:12.528 "is_configured": true, 00:29:12.528 "data_offset": 2048, 00:29:12.528 "data_size": 63488 00:29:12.528 } 00:29:12.528 ] 00:29:12.528 }' 00:29:12.528 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:12.528 11:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.461 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:29:13.461 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:13.461 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:13.461 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:29:13.461 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:29:13.461 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:13.461 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:29:13.719 [2024-05-15 11:22:32.315551] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:13.977 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:13.977 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:13.977 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:29:13.977 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.235 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:29:14.235 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:14.235 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:29:14.235 [2024-05-15 11:22:32.808236] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:14.235 [2024-05-15 11:22:32.808331] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:14.549 [2024-05-15 11:22:32.891147] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:14.549 [2024-05-15 11:22:32.891256] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:14.549 [2024-05-15 11:22:32.891274] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:29:14.549 11:22:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:14.549 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:14.549 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.549 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:29:14.807 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:29:14.807 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:29:14.807 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:29:14.807 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:29:14.807 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:29:14.807 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:29:15.066 BaseBdev2 00:29:15.066 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:29:15.066 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:29:15.066 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:15.066 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:29:15.066 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:29:15.066 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:15.066 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:15.324 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:15.324 [ 00:29:15.324 { 00:29:15.324 "name": "BaseBdev2", 00:29:15.324 "aliases": [ 00:29:15.324 "79697270-8334-41e8-8826-637deb870791" 00:29:15.324 ], 00:29:15.324 "product_name": "Malloc disk", 00:29:15.324 "block_size": 512, 00:29:15.324 "num_blocks": 65536, 00:29:15.324 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:15.324 "assigned_rate_limits": { 00:29:15.325 "rw_ios_per_sec": 0, 00:29:15.325 "rw_mbytes_per_sec": 0, 00:29:15.325 "r_mbytes_per_sec": 0, 00:29:15.325 "w_mbytes_per_sec": 0 00:29:15.325 }, 00:29:15.325 "claimed": false, 00:29:15.325 "zoned": false, 00:29:15.325 "supported_io_types": { 00:29:15.325 "read": true, 00:29:15.325 "write": true, 00:29:15.325 "unmap": true, 00:29:15.325 "write_zeroes": true, 00:29:15.325 "flush": true, 00:29:15.325 "reset": true, 00:29:15.325 "compare": false, 00:29:15.325 "compare_and_write": false, 00:29:15.325 "abort": true, 00:29:15.325 "nvme_admin": false, 00:29:15.325 "nvme_io": false 00:29:15.325 }, 00:29:15.325 "memory_domains": [ 00:29:15.325 { 00:29:15.325 "dma_device_id": "system", 00:29:15.325 "dma_device_type": 1 00:29:15.325 }, 00:29:15.325 { 00:29:15.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:15.325 "dma_device_type": 2 00:29:15.325 } 00:29:15.325 ], 
00:29:15.325 "driver_specific": {} 00:29:15.325 } 00:29:15.325 ] 00:29:15.325 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:29:15.325 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:29:15.325 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:29:15.325 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:29:15.583 BaseBdev3 00:29:15.583 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:29:15.583 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:29:15.583 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:15.583 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:29:15.583 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:29:15.583 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:15.583 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:15.841 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:16.099 [ 00:29:16.099 { 00:29:16.099 "name": "BaseBdev3", 00:29:16.099 "aliases": [ 00:29:16.099 "1a7809f1-a5ab-48a8-8405-bc95813a6bbd" 00:29:16.099 ], 00:29:16.099 "product_name": "Malloc disk", 00:29:16.099 "block_size": 512, 00:29:16.099 "num_blocks": 65536, 00:29:16.099 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:16.099 "assigned_rate_limits": { 00:29:16.099 "rw_ios_per_sec": 0, 00:29:16.099 "rw_mbytes_per_sec": 0, 00:29:16.099 "r_mbytes_per_sec": 0, 00:29:16.099 "w_mbytes_per_sec": 0 00:29:16.099 }, 00:29:16.099 "claimed": false, 00:29:16.099 "zoned": false, 00:29:16.099 "supported_io_types": { 00:29:16.099 "read": true, 00:29:16.099 "write": true, 00:29:16.099 "unmap": true, 00:29:16.099 "write_zeroes": true, 00:29:16.099 "flush": true, 00:29:16.099 "reset": true, 00:29:16.099 "compare": false, 00:29:16.099 "compare_and_write": false, 00:29:16.099 "abort": true, 00:29:16.099 "nvme_admin": false, 00:29:16.099 "nvme_io": false 00:29:16.099 }, 00:29:16.099 "memory_domains": [ 00:29:16.099 { 00:29:16.099 "dma_device_id": "system", 00:29:16.099 "dma_device_type": 1 00:29:16.099 }, 00:29:16.099 { 00:29:16.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:16.099 "dma_device_type": 2 00:29:16.099 } 00:29:16.099 ], 00:29:16.099 "driver_specific": {} 00:29:16.099 } 00:29:16.099 ] 00:29:16.099 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:29:16.099 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:29:16.099 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:29:16.099 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' 
-n Existed_Raid 00:29:16.358 [2024-05-15 11:22:34.821182] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:16.358 [2024-05-15 11:22:34.821306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:16.358 [2024-05-15 11:22:34.821353] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:16.358 [2024-05-15 11:22:34.823103] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:16.358 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.359 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:16.617 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:16.617 "name": "Existed_Raid", 00:29:16.617 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:16.617 "strip_size_kb": 0, 00:29:16.617 "state": "configuring", 00:29:16.617 "raid_level": "raid1", 00:29:16.617 "superblock": true, 00:29:16.617 "num_base_bdevs": 3, 00:29:16.617 "num_base_bdevs_discovered": 2, 00:29:16.617 "num_base_bdevs_operational": 3, 00:29:16.617 "base_bdevs_list": [ 00:29:16.617 { 00:29:16.617 "name": "BaseBdev1", 00:29:16.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.617 "is_configured": false, 00:29:16.617 "data_offset": 0, 00:29:16.617 "data_size": 0 00:29:16.617 }, 00:29:16.617 { 00:29:16.617 "name": "BaseBdev2", 00:29:16.617 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:16.617 "is_configured": true, 00:29:16.617 "data_offset": 2048, 00:29:16.617 "data_size": 63488 00:29:16.617 }, 00:29:16.617 { 00:29:16.617 "name": "BaseBdev3", 00:29:16.617 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:16.617 "is_configured": true, 00:29:16.617 "data_offset": 2048, 00:29:16.617 "data_size": 63488 00:29:16.617 } 00:29:16.617 ] 00:29:16.617 }' 00:29:16.617 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:16.617 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:17.184 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:17.442 [2024-05-15 11:22:35.985285] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:17.442 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:17.443 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.701 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:17.701 "name": "Existed_Raid", 00:29:17.701 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:17.701 "strip_size_kb": 0, 00:29:17.701 "state": "configuring", 00:29:17.701 "raid_level": "raid1", 00:29:17.701 "superblock": true, 00:29:17.701 "num_base_bdevs": 3, 00:29:17.701 "num_base_bdevs_discovered": 1, 00:29:17.701 "num_base_bdevs_operational": 3, 00:29:17.701 "base_bdevs_list": [ 00:29:17.701 { 00:29:17.701 "name": "BaseBdev1", 00:29:17.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.701 "is_configured": false, 00:29:17.701 "data_offset": 0, 00:29:17.701 "data_size": 0 00:29:17.701 }, 00:29:17.701 { 00:29:17.701 "name": null, 00:29:17.701 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:17.701 "is_configured": false, 00:29:17.701 "data_offset": 2048, 00:29:17.701 "data_size": 63488 00:29:17.701 }, 00:29:17.701 { 00:29:17.701 "name": "BaseBdev3", 00:29:17.701 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:17.701 "is_configured": true, 00:29:17.701 "data_offset": 2048, 00:29:17.701 "data_size": 63488 00:29:17.701 } 00:29:17.701 ] 00:29:17.701 }' 00:29:17.701 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:17.701 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:18.269 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.269 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:18.837 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false 
== \f\a\l\s\e ]] 00:29:18.837 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:29:18.837 [2024-05-15 11:22:37.425116] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:18.837 BaseBdev1 00:29:18.837 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:29:18.837 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:29:18.837 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:18.837 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:29:18.837 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:29:18.837 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:18.837 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:19.095 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:19.354 [ 00:29:19.354 { 00:29:19.354 "name": "BaseBdev1", 00:29:19.354 "aliases": [ 00:29:19.354 "19b60d57-ae89-4782-a4d8-2b46da68384c" 00:29:19.354 ], 00:29:19.354 "product_name": "Malloc disk", 00:29:19.354 "block_size": 512, 00:29:19.354 "num_blocks": 65536, 00:29:19.354 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:19.354 "assigned_rate_limits": { 00:29:19.354 "rw_ios_per_sec": 0, 00:29:19.354 "rw_mbytes_per_sec": 0, 00:29:19.354 "r_mbytes_per_sec": 0, 00:29:19.354 "w_mbytes_per_sec": 0 00:29:19.354 }, 00:29:19.354 "claimed": true, 00:29:19.354 "claim_type": "exclusive_write", 00:29:19.354 "zoned": false, 00:29:19.354 "supported_io_types": { 00:29:19.354 "read": true, 00:29:19.354 "write": true, 00:29:19.354 "unmap": true, 00:29:19.354 "write_zeroes": true, 00:29:19.354 "flush": true, 00:29:19.354 "reset": true, 00:29:19.354 "compare": false, 00:29:19.354 "compare_and_write": false, 00:29:19.354 "abort": true, 00:29:19.354 "nvme_admin": false, 00:29:19.354 "nvme_io": false 00:29:19.354 }, 00:29:19.354 "memory_domains": [ 00:29:19.354 { 00:29:19.354 "dma_device_id": "system", 00:29:19.354 "dma_device_type": 1 00:29:19.354 }, 00:29:19.354 { 00:29:19.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.354 "dma_device_type": 2 00:29:19.354 } 00:29:19.354 ], 00:29:19.354 "driver_specific": {} 00:29:19.354 } 00:29:19.354 ] 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:19.354 11:22:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:19.354 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.612 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:19.612 "name": "Existed_Raid", 00:29:19.612 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:19.612 "strip_size_kb": 0, 00:29:19.612 "state": "configuring", 00:29:19.612 "raid_level": "raid1", 00:29:19.612 "superblock": true, 00:29:19.612 "num_base_bdevs": 3, 00:29:19.612 "num_base_bdevs_discovered": 2, 00:29:19.612 "num_base_bdevs_operational": 3, 00:29:19.612 "base_bdevs_list": [ 00:29:19.612 { 00:29:19.612 "name": "BaseBdev1", 00:29:19.612 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:19.612 "is_configured": true, 00:29:19.612 "data_offset": 2048, 00:29:19.612 "data_size": 63488 00:29:19.612 }, 00:29:19.612 { 00:29:19.612 "name": null, 00:29:19.612 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:19.612 "is_configured": false, 00:29:19.612 "data_offset": 2048, 00:29:19.612 "data_size": 63488 00:29:19.612 }, 00:29:19.612 { 00:29:19.612 "name": "BaseBdev3", 00:29:19.612 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:19.612 "is_configured": true, 00:29:19.612 "data_offset": 2048, 00:29:19.612 "data_size": 63488 00:29:19.612 } 00:29:19.612 ] 00:29:19.612 }' 00:29:19.612 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:19.612 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:20.179 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.179 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:20.437 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:29:20.437 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:29:20.695 [2024-05-15 11:22:39.237642] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:20.695 11:22:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.695 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:20.954 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:20.954 "name": "Existed_Raid", 00:29:20.954 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:20.954 "strip_size_kb": 0, 00:29:20.954 "state": "configuring", 00:29:20.954 "raid_level": "raid1", 00:29:20.954 "superblock": true, 00:29:20.954 "num_base_bdevs": 3, 00:29:20.954 "num_base_bdevs_discovered": 1, 00:29:20.954 "num_base_bdevs_operational": 3, 00:29:20.954 "base_bdevs_list": [ 00:29:20.954 { 00:29:20.954 "name": "BaseBdev1", 00:29:20.954 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:20.954 "is_configured": true, 00:29:20.954 "data_offset": 2048, 00:29:20.954 "data_size": 63488 00:29:20.954 }, 00:29:20.954 { 00:29:20.954 "name": null, 00:29:20.954 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:20.954 "is_configured": false, 00:29:20.954 "data_offset": 2048, 00:29:20.954 "data_size": 63488 00:29:20.954 }, 00:29:20.954 { 00:29:20.954 "name": null, 00:29:20.954 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:20.954 "is_configured": false, 00:29:20.954 "data_offset": 2048, 00:29:20.954 "data_size": 63488 00:29:20.954 } 00:29:20.954 ] 00:29:20.954 }' 00:29:20.954 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:20.954 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.889 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:21.889 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.889 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:29:21.889 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:29:22.149 [2024-05-15 11:22:40.581861] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 
00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.149 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.408 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:22.408 "name": "Existed_Raid", 00:29:22.408 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:22.408 "strip_size_kb": 0, 00:29:22.408 "state": "configuring", 00:29:22.408 "raid_level": "raid1", 00:29:22.408 "superblock": true, 00:29:22.408 "num_base_bdevs": 3, 00:29:22.408 "num_base_bdevs_discovered": 2, 00:29:22.408 "num_base_bdevs_operational": 3, 00:29:22.408 "base_bdevs_list": [ 00:29:22.408 { 00:29:22.408 "name": "BaseBdev1", 00:29:22.408 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:22.408 "is_configured": true, 00:29:22.408 "data_offset": 2048, 00:29:22.408 "data_size": 63488 00:29:22.408 }, 00:29:22.408 { 00:29:22.408 "name": null, 00:29:22.408 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:22.408 "is_configured": false, 00:29:22.408 "data_offset": 2048, 00:29:22.408 "data_size": 63488 00:29:22.408 }, 00:29:22.408 { 00:29:22.408 "name": "BaseBdev3", 00:29:22.408 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:22.408 "is_configured": true, 00:29:22.408 "data_offset": 2048, 00:29:22.408 "data_size": 63488 00:29:22.408 } 00:29:22.408 ] 00:29:22.408 }' 00:29:22.408 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:22.408 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.975 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.975 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:23.233 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:29:23.233 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:23.493 [2024-05-15 11:22:41.938164] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:23.493 11:22:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:23.493 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.767 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:23.767 "name": "Existed_Raid", 00:29:23.767 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:23.767 "strip_size_kb": 0, 00:29:23.767 "state": "configuring", 00:29:23.767 "raid_level": "raid1", 00:29:23.767 "superblock": true, 00:29:23.767 "num_base_bdevs": 3, 00:29:23.767 "num_base_bdevs_discovered": 1, 00:29:23.767 "num_base_bdevs_operational": 3, 00:29:23.767 "base_bdevs_list": [ 00:29:23.767 { 00:29:23.767 "name": null, 00:29:23.767 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:23.767 "is_configured": false, 00:29:23.767 "data_offset": 2048, 00:29:23.767 "data_size": 63488 00:29:23.767 }, 00:29:23.767 { 00:29:23.767 "name": null, 00:29:23.767 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:23.767 "is_configured": false, 00:29:23.767 "data_offset": 2048, 00:29:23.767 "data_size": 63488 00:29:23.767 }, 00:29:23.767 { 00:29:23.767 "name": "BaseBdev3", 00:29:23.767 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:23.767 "is_configured": true, 00:29:23.767 "data_offset": 2048, 00:29:23.767 "data_size": 63488 00:29:23.767 } 00:29:23.767 ] 00:29:23.767 }' 00:29:23.767 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:23.767 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.335 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.335 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:24.593 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:29:24.593 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:29:24.851 [2024-05-15 11:22:43.377683] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
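The verify_raid_bdev_state call above expands into the locals and the jq probe traced next. A sketch of the helper as it can be reconstructed from the trace, using the same rpc_py shorthand; the assertions past bdev_raid.sh@129 run under xtrace_disable (set +x), so their exact form here is an assumption:

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local raid_bdev_info
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # assumed checks, hidden behind xtrace_disable in the log:
        [[ $(jq -r .state <<< "$raid_bdev_info") == "$expected_state" ]]
        [[ $(jq -r .raid_level <<< "$raid_bdev_info") == "$raid_level" ]]
        [[ $(jq -r .num_base_bdevs_operational <<< "$raid_bdev_info") == "$num_base_bdevs_operational" ]]
    }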
00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.851 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:25.109 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:25.109 "name": "Existed_Raid", 00:29:25.109 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:25.109 "strip_size_kb": 0, 00:29:25.109 "state": "configuring", 00:29:25.109 "raid_level": "raid1", 00:29:25.109 "superblock": true, 00:29:25.109 "num_base_bdevs": 3, 00:29:25.109 "num_base_bdevs_discovered": 2, 00:29:25.109 "num_base_bdevs_operational": 3, 00:29:25.109 "base_bdevs_list": [ 00:29:25.109 { 00:29:25.109 "name": null, 00:29:25.109 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:25.109 "is_configured": false, 00:29:25.109 "data_offset": 2048, 00:29:25.109 "data_size": 63488 00:29:25.109 }, 00:29:25.109 { 00:29:25.109 "name": "BaseBdev2", 00:29:25.109 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:25.109 "is_configured": true, 00:29:25.109 "data_offset": 2048, 00:29:25.109 "data_size": 63488 00:29:25.109 }, 00:29:25.109 { 00:29:25.109 "name": "BaseBdev3", 00:29:25.109 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:25.109 "is_configured": true, 00:29:25.109 "data_offset": 2048, 00:29:25.109 "data_size": 63488 00:29:25.109 } 00:29:25.109 ] 00:29:25.109 }' 00:29:25.109 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:25.109 11:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.042 11:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.042 11:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:26.042 11:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:29:26.042 11:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.042 11:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:29:26.300 11:22:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 19b60d57-ae89-4782-a4d8-2b46da68384c 00:29:26.558 [2024-05-15 11:22:45.057267] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:29:26.558 [2024-05-15 11:22:45.057453] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:29:26.558 [2024-05-15 11:22:45.057470] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:26.558 [2024-05-15 11:22:45.057559] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:29:26.558 [2024-05-15 11:22:45.057791] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:29:26.558 NewBaseBdev 00:29:26.558 [2024-05-15 11:22:45.058062] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:29:26.558 [2024-05-15 11:22:45.058184] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:26.558 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:29:26.558 11:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:29:26.558 11:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:26.558 11:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:29:26.558 11:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:29:26.558 11:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:26.558 11:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:26.838 11:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:29:27.096 [ 00:29:27.096 { 00:29:27.096 "name": "NewBaseBdev", 00:29:27.096 "aliases": [ 00:29:27.096 "19b60d57-ae89-4782-a4d8-2b46da68384c" 00:29:27.096 ], 00:29:27.096 "product_name": "Malloc disk", 00:29:27.096 "block_size": 512, 00:29:27.096 "num_blocks": 65536, 00:29:27.096 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:27.096 "assigned_rate_limits": { 00:29:27.096 "rw_ios_per_sec": 0, 00:29:27.096 "rw_mbytes_per_sec": 0, 00:29:27.096 "r_mbytes_per_sec": 0, 00:29:27.096 "w_mbytes_per_sec": 0 00:29:27.096 }, 00:29:27.096 "claimed": true, 00:29:27.096 "claim_type": "exclusive_write", 00:29:27.096 "zoned": false, 00:29:27.096 "supported_io_types": { 00:29:27.096 "read": true, 00:29:27.096 "write": true, 00:29:27.096 "unmap": true, 00:29:27.096 "write_zeroes": true, 00:29:27.096 "flush": true, 00:29:27.096 "reset": true, 00:29:27.096 "compare": false, 00:29:27.096 "compare_and_write": false, 00:29:27.096 "abort": true, 00:29:27.096 "nvme_admin": false, 00:29:27.096 "nvme_io": false 00:29:27.096 }, 00:29:27.096 "memory_domains": [ 00:29:27.096 { 00:29:27.096 "dma_device_id": "system", 00:29:27.096 "dma_device_type": 1 00:29:27.096 }, 00:29:27.096 { 00:29:27.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:27.096 "dma_device_type": 2 00:29:27.096 } 00:29:27.096 ], 00:29:27.096 
"driver_specific": {} 00:29:27.096 } 00:29:27.096 ] 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:27.096 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.355 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:27.355 "name": "Existed_Raid", 00:29:27.355 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:27.355 "strip_size_kb": 0, 00:29:27.355 "state": "online", 00:29:27.355 "raid_level": "raid1", 00:29:27.355 "superblock": true, 00:29:27.355 "num_base_bdevs": 3, 00:29:27.355 "num_base_bdevs_discovered": 3, 00:29:27.355 "num_base_bdevs_operational": 3, 00:29:27.355 "base_bdevs_list": [ 00:29:27.355 { 00:29:27.355 "name": "NewBaseBdev", 00:29:27.355 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:27.355 "is_configured": true, 00:29:27.355 "data_offset": 2048, 00:29:27.355 "data_size": 63488 00:29:27.355 }, 00:29:27.355 { 00:29:27.355 "name": "BaseBdev2", 00:29:27.355 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:27.355 "is_configured": true, 00:29:27.355 "data_offset": 2048, 00:29:27.355 "data_size": 63488 00:29:27.355 }, 00:29:27.355 { 00:29:27.355 "name": "BaseBdev3", 00:29:27.355 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:27.355 "is_configured": true, 00:29:27.355 "data_offset": 2048, 00:29:27.355 "data_size": 63488 00:29:27.355 } 00:29:27.355 ] 00:29:27.355 }' 00:29:27.355 11:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:27.355 11:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.921 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:29:27.921 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:29:27.921 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:29:27.921 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_info 00:29:27.921 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:29:27.921 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:29:27.921 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:29:27.921 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:29:28.179 [2024-05-15 11:22:46.697707] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:28.179 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:29:28.179 "name": "Existed_Raid", 00:29:28.179 "aliases": [ 00:29:28.179 "814adee8-6019-4d24-b494-1bae807381d2" 00:29:28.179 ], 00:29:28.179 "product_name": "Raid Volume", 00:29:28.179 "block_size": 512, 00:29:28.179 "num_blocks": 63488, 00:29:28.179 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:28.179 "assigned_rate_limits": { 00:29:28.179 "rw_ios_per_sec": 0, 00:29:28.179 "rw_mbytes_per_sec": 0, 00:29:28.179 "r_mbytes_per_sec": 0, 00:29:28.179 "w_mbytes_per_sec": 0 00:29:28.179 }, 00:29:28.179 "claimed": false, 00:29:28.179 "zoned": false, 00:29:28.179 "supported_io_types": { 00:29:28.179 "read": true, 00:29:28.179 "write": true, 00:29:28.179 "unmap": false, 00:29:28.179 "write_zeroes": true, 00:29:28.179 "flush": false, 00:29:28.179 "reset": true, 00:29:28.179 "compare": false, 00:29:28.179 "compare_and_write": false, 00:29:28.179 "abort": false, 00:29:28.179 "nvme_admin": false, 00:29:28.179 "nvme_io": false 00:29:28.179 }, 00:29:28.179 "memory_domains": [ 00:29:28.179 { 00:29:28.179 "dma_device_id": "system", 00:29:28.179 "dma_device_type": 1 00:29:28.179 }, 00:29:28.179 { 00:29:28.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:28.179 "dma_device_type": 2 00:29:28.179 }, 00:29:28.179 { 00:29:28.179 "dma_device_id": "system", 00:29:28.179 "dma_device_type": 1 00:29:28.179 }, 00:29:28.179 { 00:29:28.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:28.179 "dma_device_type": 2 00:29:28.179 }, 00:29:28.179 { 00:29:28.179 "dma_device_id": "system", 00:29:28.179 "dma_device_type": 1 00:29:28.179 }, 00:29:28.179 { 00:29:28.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:28.179 "dma_device_type": 2 00:29:28.179 } 00:29:28.179 ], 00:29:28.179 "driver_specific": { 00:29:28.179 "raid": { 00:29:28.179 "uuid": "814adee8-6019-4d24-b494-1bae807381d2", 00:29:28.179 "strip_size_kb": 0, 00:29:28.179 "state": "online", 00:29:28.179 "raid_level": "raid1", 00:29:28.179 "superblock": true, 00:29:28.179 "num_base_bdevs": 3, 00:29:28.179 "num_base_bdevs_discovered": 3, 00:29:28.179 "num_base_bdevs_operational": 3, 00:29:28.179 "base_bdevs_list": [ 00:29:28.179 { 00:29:28.179 "name": "NewBaseBdev", 00:29:28.179 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:28.179 "is_configured": true, 00:29:28.179 "data_offset": 2048, 00:29:28.179 "data_size": 63488 00:29:28.179 }, 00:29:28.179 { 00:29:28.179 "name": "BaseBdev2", 00:29:28.179 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:28.179 "is_configured": true, 00:29:28.179 "data_offset": 2048, 00:29:28.179 "data_size": 63488 00:29:28.179 }, 00:29:28.179 { 00:29:28.179 "name": "BaseBdev3", 00:29:28.179 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:28.179 "is_configured": true, 00:29:28.179 "data_offset": 2048, 00:29:28.179 "data_size": 63488 00:29:28.179 } 00:29:28.179 ] 00:29:28.179 } 
00:29:28.179 } 00:29:28.179 }' 00:29:28.179 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:28.179 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:29:28.179 BaseBdev2 00:29:28.179 BaseBdev3' 00:29:28.179 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:28.179 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:29:28.179 11:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:28.449 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:28.449 "name": "NewBaseBdev", 00:29:28.449 "aliases": [ 00:29:28.449 "19b60d57-ae89-4782-a4d8-2b46da68384c" 00:29:28.449 ], 00:29:28.449 "product_name": "Malloc disk", 00:29:28.449 "block_size": 512, 00:29:28.449 "num_blocks": 65536, 00:29:28.449 "uuid": "19b60d57-ae89-4782-a4d8-2b46da68384c", 00:29:28.449 "assigned_rate_limits": { 00:29:28.449 "rw_ios_per_sec": 0, 00:29:28.449 "rw_mbytes_per_sec": 0, 00:29:28.449 "r_mbytes_per_sec": 0, 00:29:28.449 "w_mbytes_per_sec": 0 00:29:28.449 }, 00:29:28.449 "claimed": true, 00:29:28.449 "claim_type": "exclusive_write", 00:29:28.449 "zoned": false, 00:29:28.449 "supported_io_types": { 00:29:28.449 "read": true, 00:29:28.449 "write": true, 00:29:28.449 "unmap": true, 00:29:28.449 "write_zeroes": true, 00:29:28.449 "flush": true, 00:29:28.449 "reset": true, 00:29:28.449 "compare": false, 00:29:28.449 "compare_and_write": false, 00:29:28.449 "abort": true, 00:29:28.449 "nvme_admin": false, 00:29:28.449 "nvme_io": false 00:29:28.449 }, 00:29:28.449 "memory_domains": [ 00:29:28.449 { 00:29:28.449 "dma_device_id": "system", 00:29:28.449 "dma_device_type": 1 00:29:28.449 }, 00:29:28.449 { 00:29:28.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:28.449 "dma_device_type": 2 00:29:28.449 } 00:29:28.449 ], 00:29:28.449 "driver_specific": {} 00:29:28.449 }' 00:29:28.449 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:28.449 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:28.710 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:28.710 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:28.710 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:28.710 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:28.710 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:28.710 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:28.710 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:28.710 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:28.967 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:28.967 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:28.967 11:22:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:28.967 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:29:28.967 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:29.225 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:29.225 "name": "BaseBdev2", 00:29:29.225 "aliases": [ 00:29:29.225 "79697270-8334-41e8-8826-637deb870791" 00:29:29.225 ], 00:29:29.225 "product_name": "Malloc disk", 00:29:29.225 "block_size": 512, 00:29:29.225 "num_blocks": 65536, 00:29:29.225 "uuid": "79697270-8334-41e8-8826-637deb870791", 00:29:29.225 "assigned_rate_limits": { 00:29:29.225 "rw_ios_per_sec": 0, 00:29:29.225 "rw_mbytes_per_sec": 0, 00:29:29.225 "r_mbytes_per_sec": 0, 00:29:29.225 "w_mbytes_per_sec": 0 00:29:29.225 }, 00:29:29.225 "claimed": true, 00:29:29.225 "claim_type": "exclusive_write", 00:29:29.225 "zoned": false, 00:29:29.225 "supported_io_types": { 00:29:29.225 "read": true, 00:29:29.225 "write": true, 00:29:29.225 "unmap": true, 00:29:29.225 "write_zeroes": true, 00:29:29.225 "flush": true, 00:29:29.225 "reset": true, 00:29:29.225 "compare": false, 00:29:29.225 "compare_and_write": false, 00:29:29.225 "abort": true, 00:29:29.225 "nvme_admin": false, 00:29:29.225 "nvme_io": false 00:29:29.225 }, 00:29:29.225 "memory_domains": [ 00:29:29.225 { 00:29:29.225 "dma_device_id": "system", 00:29:29.225 "dma_device_type": 1 00:29:29.225 }, 00:29:29.225 { 00:29:29.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:29.225 "dma_device_type": 2 00:29:29.225 } 00:29:29.225 ], 00:29:29.225 "driver_specific": {} 00:29:29.225 }' 00:29:29.225 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:29.225 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:29.225 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:29.225 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:29.483 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:29.483 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:29.483 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:29.483 11:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:29.483 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:29.483 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:29.483 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:29.740 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:29.740 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:29.740 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:29:29.740 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:29.998 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 
00:29:29.998 "name": "BaseBdev3", 00:29:29.998 "aliases": [ 00:29:29.998 "1a7809f1-a5ab-48a8-8405-bc95813a6bbd" 00:29:29.998 ], 00:29:29.998 "product_name": "Malloc disk", 00:29:29.998 "block_size": 512, 00:29:29.998 "num_blocks": 65536, 00:29:29.998 "uuid": "1a7809f1-a5ab-48a8-8405-bc95813a6bbd", 00:29:29.998 "assigned_rate_limits": { 00:29:29.998 "rw_ios_per_sec": 0, 00:29:29.998 "rw_mbytes_per_sec": 0, 00:29:29.998 "r_mbytes_per_sec": 0, 00:29:29.998 "w_mbytes_per_sec": 0 00:29:29.998 }, 00:29:29.998 "claimed": true, 00:29:29.998 "claim_type": "exclusive_write", 00:29:29.998 "zoned": false, 00:29:29.998 "supported_io_types": { 00:29:29.998 "read": true, 00:29:29.998 "write": true, 00:29:29.998 "unmap": true, 00:29:29.998 "write_zeroes": true, 00:29:29.998 "flush": true, 00:29:29.998 "reset": true, 00:29:29.998 "compare": false, 00:29:29.998 "compare_and_write": false, 00:29:29.998 "abort": true, 00:29:29.998 "nvme_admin": false, 00:29:29.998 "nvme_io": false 00:29:29.998 }, 00:29:29.998 "memory_domains": [ 00:29:29.998 { 00:29:29.998 "dma_device_id": "system", 00:29:29.998 "dma_device_type": 1 00:29:29.998 }, 00:29:29.998 { 00:29:29.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:29.998 "dma_device_type": 2 00:29:29.998 } 00:29:29.998 ], 00:29:29.998 "driver_specific": {} 00:29:29.998 }' 00:29:29.998 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:29.998 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:29.998 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:29.998 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:29.998 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:29.998 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:29.998 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:30.256 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:30.256 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:30.256 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:30.256 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:30.256 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:30.256 11:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:30.514 [2024-05-15 11:22:49.049795] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:30.514 [2024-05-15 11:22:49.049851] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:30.514 [2024-05-15 11:22:49.049917] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:30.514 [2024-05-15 11:22:49.050105] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:30.514 [2024-05-15 11:22:49.050119] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # 
killprocess 62680 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 62680 ']' 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 62680 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62680 00:29:30.514 killing process with pid 62680 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62680' 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 62680 00:29:30.514 11:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 62680 00:29:30.515 [2024-05-15 11:22:49.088882] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:30.773 [2024-05-15 11:22:49.344959] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:32.145 11:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:29:32.145 00:29:32.145 real 0m31.090s 00:29:32.145 user 0m58.632s 00:29:32.145 sys 0m3.068s 00:29:32.145 11:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:32.145 11:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.145 ************************************ 00:29:32.145 END TEST raid_state_function_test_sb 00:29:32.145 ************************************ 00:29:32.145 11:22:50 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:29:32.145 11:22:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:29:32.145 11:22:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:32.145 11:22:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:32.145 ************************************ 00:29:32.145 START TEST raid_superblock_test 00:29:32.145 ************************************ 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63681 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63681 /var/tmp/spdk-raid.sock 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 63681 ']' 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:29:32.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:32.145 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.145 [2024-05-15 11:22:50.779795] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
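Once the bdev_svc app is listening, the loop traced below (bdev_raid.sh@416-426) builds each base bdev as a 32 MB malloc disk with 512-byte blocks, wrapped in a passthru bdev carrying a fixed UUID, and the array is then assembled from the passthru devices. A condensed sketch of that setup, with rpc_py again standing in for the full rpc.py command; the -s flag matches the trace and requests the raid superblock this test exercises:

    for i in 1 2 3; do
        $rpc_py bdev_malloc_create 32 512 -b "malloc$i"
        $rpc_py bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    $rpc_py bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s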
00:29:32.145 [2024-05-15 11:22:50.779980] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63681 ] 00:29:32.403 [2024-05-15 11:22:50.931252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.660 [2024-05-15 11:22:51.154505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.918 [2024-05-15 11:22:51.360334] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.177 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:29:33.435 malloc1 00:29:33.435 11:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:33.435 [2024-05-15 11:22:52.025660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:33.435 [2024-05-15 11:22:52.025766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.435 [2024-05-15 11:22:52.026057] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:29:33.435 [2024-05-15 11:22:52.026119] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.435 pt1 00:29:33.435 [2024-05-15 11:22:52.027922] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.435 [2024-05-15 11:22:52.027994] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:33.435 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:33.435 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.435 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:33.435 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:29:33.435 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:33.435 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:29:33.435 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.435 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.435 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:29:33.693 malloc2 00:29:33.693 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:33.952 [2024-05-15 11:22:52.461925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:33.952 [2024-05-15 11:22:52.462016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.952 [2024-05-15 11:22:52.462070] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:29:33.952 [2024-05-15 11:22:52.462115] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.952 [2024-05-15 11:22:52.463885] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.952 [2024-05-15 11:22:52.463939] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:33.952 pt2 00:29:33.952 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:33.952 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.952 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:29:33.952 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:29:33.952 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:29:33.952 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.952 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.952 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.952 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:29:34.211 malloc3 00:29:34.211 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:34.470 [2024-05-15 11:22:52.975268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:34.470 [2024-05-15 11:22:52.975360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.470 [2024-05-15 11:22:52.975411] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:29:34.470 [2024-05-15 11:22:52.975455] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.470 [2024-05-15 11:22:52.977275] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.470 [2024-05-15 11:22:52.977325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:34.470 pt3 00:29:34.470 11:22:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:34.470 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:34.470 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:29:34.728 [2024-05-15 11:22:53.211418] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:34.728 [2024-05-15 11:22:53.213158] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:34.728 [2024-05-15 11:22:53.213229] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:34.728 [2024-05-15 11:22:53.213369] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:29:34.728 [2024-05-15 11:22:53.213387] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:34.728 [2024-05-15 11:22:53.213517] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:29:34.728 [2024-05-15 11:22:53.213802] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:29:34.728 [2024-05-15 11:22:53.213819] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:29:34.728 [2024-05-15 11:22:53.213975] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.728 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.985 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:34.985 "name": "raid_bdev1", 00:29:34.985 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:34.985 "strip_size_kb": 0, 00:29:34.985 "state": "online", 00:29:34.985 "raid_level": "raid1", 00:29:34.985 "superblock": true, 00:29:34.985 "num_base_bdevs": 3, 00:29:34.985 "num_base_bdevs_discovered": 3, 00:29:34.985 "num_base_bdevs_operational": 3, 00:29:34.985 "base_bdevs_list": [ 00:29:34.985 { 00:29:34.985 "name": "pt1", 00:29:34.985 "uuid": "0f9cfc0a-f79a-5fcf-90ba-ad3531370748", 00:29:34.985 
"is_configured": true, 00:29:34.985 "data_offset": 2048, 00:29:34.985 "data_size": 63488 00:29:34.985 }, 00:29:34.985 { 00:29:34.985 "name": "pt2", 00:29:34.985 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:34.985 "is_configured": true, 00:29:34.985 "data_offset": 2048, 00:29:34.985 "data_size": 63488 00:29:34.985 }, 00:29:34.985 { 00:29:34.985 "name": "pt3", 00:29:34.985 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:34.985 "is_configured": true, 00:29:34.985 "data_offset": 2048, 00:29:34.985 "data_size": 63488 00:29:34.985 } 00:29:34.985 ] 00:29:34.985 }' 00:29:34.985 11:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:34.985 11:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.560 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:35.560 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:29:35.560 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:29:35.560 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:29:35.560 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:29:35.560 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:29:35.560 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:35.560 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:29:35.819 [2024-05-15 11:22:54.395701] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:35.819 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:29:35.819 "name": "raid_bdev1", 00:29:35.819 "aliases": [ 00:29:35.819 "f0ef0881-3fb0-481e-8816-8b6923abbf34" 00:29:35.820 ], 00:29:35.820 "product_name": "Raid Volume", 00:29:35.820 "block_size": 512, 00:29:35.820 "num_blocks": 63488, 00:29:35.820 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:35.820 "assigned_rate_limits": { 00:29:35.820 "rw_ios_per_sec": 0, 00:29:35.820 "rw_mbytes_per_sec": 0, 00:29:35.820 "r_mbytes_per_sec": 0, 00:29:35.820 "w_mbytes_per_sec": 0 00:29:35.820 }, 00:29:35.820 "claimed": false, 00:29:35.820 "zoned": false, 00:29:35.820 "supported_io_types": { 00:29:35.820 "read": true, 00:29:35.820 "write": true, 00:29:35.820 "unmap": false, 00:29:35.820 "write_zeroes": true, 00:29:35.820 "flush": false, 00:29:35.820 "reset": true, 00:29:35.820 "compare": false, 00:29:35.820 "compare_and_write": false, 00:29:35.820 "abort": false, 00:29:35.820 "nvme_admin": false, 00:29:35.820 "nvme_io": false 00:29:35.820 }, 00:29:35.820 "memory_domains": [ 00:29:35.820 { 00:29:35.820 "dma_device_id": "system", 00:29:35.820 "dma_device_type": 1 00:29:35.820 }, 00:29:35.820 { 00:29:35.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:35.820 "dma_device_type": 2 00:29:35.820 }, 00:29:35.820 { 00:29:35.820 "dma_device_id": "system", 00:29:35.820 "dma_device_type": 1 00:29:35.820 }, 00:29:35.820 { 00:29:35.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:35.820 "dma_device_type": 2 00:29:35.820 }, 00:29:35.820 { 00:29:35.820 "dma_device_id": "system", 00:29:35.820 "dma_device_type": 1 00:29:35.820 }, 00:29:35.820 { 00:29:35.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:35.820 
"dma_device_type": 2 00:29:35.820 } 00:29:35.820 ], 00:29:35.820 "driver_specific": { 00:29:35.820 "raid": { 00:29:35.820 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:35.820 "strip_size_kb": 0, 00:29:35.820 "state": "online", 00:29:35.820 "raid_level": "raid1", 00:29:35.820 "superblock": true, 00:29:35.820 "num_base_bdevs": 3, 00:29:35.820 "num_base_bdevs_discovered": 3, 00:29:35.820 "num_base_bdevs_operational": 3, 00:29:35.820 "base_bdevs_list": [ 00:29:35.820 { 00:29:35.820 "name": "pt1", 00:29:35.820 "uuid": "0f9cfc0a-f79a-5fcf-90ba-ad3531370748", 00:29:35.820 "is_configured": true, 00:29:35.820 "data_offset": 2048, 00:29:35.820 "data_size": 63488 00:29:35.820 }, 00:29:35.820 { 00:29:35.820 "name": "pt2", 00:29:35.820 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:35.820 "is_configured": true, 00:29:35.820 "data_offset": 2048, 00:29:35.820 "data_size": 63488 00:29:35.820 }, 00:29:35.820 { 00:29:35.820 "name": "pt3", 00:29:35.820 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:35.820 "is_configured": true, 00:29:35.820 "data_offset": 2048, 00:29:35.820 "data_size": 63488 00:29:35.820 } 00:29:35.820 ] 00:29:35.820 } 00:29:35.820 } 00:29:35.820 }' 00:29:35.820 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:36.079 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:29:36.079 pt2 00:29:36.079 pt3' 00:29:36.079 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:36.079 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:36.079 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:36.337 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:36.337 "name": "pt1", 00:29:36.337 "aliases": [ 00:29:36.337 "0f9cfc0a-f79a-5fcf-90ba-ad3531370748" 00:29:36.337 ], 00:29:36.337 "product_name": "passthru", 00:29:36.337 "block_size": 512, 00:29:36.337 "num_blocks": 65536, 00:29:36.337 "uuid": "0f9cfc0a-f79a-5fcf-90ba-ad3531370748", 00:29:36.337 "assigned_rate_limits": { 00:29:36.337 "rw_ios_per_sec": 0, 00:29:36.337 "rw_mbytes_per_sec": 0, 00:29:36.337 "r_mbytes_per_sec": 0, 00:29:36.337 "w_mbytes_per_sec": 0 00:29:36.337 }, 00:29:36.337 "claimed": true, 00:29:36.337 "claim_type": "exclusive_write", 00:29:36.337 "zoned": false, 00:29:36.337 "supported_io_types": { 00:29:36.337 "read": true, 00:29:36.337 "write": true, 00:29:36.337 "unmap": true, 00:29:36.337 "write_zeroes": true, 00:29:36.337 "flush": true, 00:29:36.337 "reset": true, 00:29:36.337 "compare": false, 00:29:36.337 "compare_and_write": false, 00:29:36.337 "abort": true, 00:29:36.337 "nvme_admin": false, 00:29:36.337 "nvme_io": false 00:29:36.337 }, 00:29:36.337 "memory_domains": [ 00:29:36.337 { 00:29:36.337 "dma_device_id": "system", 00:29:36.337 "dma_device_type": 1 00:29:36.337 }, 00:29:36.337 { 00:29:36.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.337 "dma_device_type": 2 00:29:36.337 } 00:29:36.337 ], 00:29:36.337 "driver_specific": { 00:29:36.337 "passthru": { 00:29:36.337 "name": "pt1", 00:29:36.337 "base_bdev_name": "malloc1" 00:29:36.337 } 00:29:36.337 } 00:29:36.337 }' 00:29:36.337 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:36.337 11:22:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:36.337 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:36.337 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:36.337 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:36.337 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:36.337 11:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:36.596 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:36.596 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:36.596 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:36.596 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:36.596 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:36.596 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:36.596 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:36.596 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:36.854 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:36.854 "name": "pt2", 00:29:36.854 "aliases": [ 00:29:36.854 "c14adf7d-a7f7-588e-ad48-204700890c02" 00:29:36.854 ], 00:29:36.854 "product_name": "passthru", 00:29:36.854 "block_size": 512, 00:29:36.854 "num_blocks": 65536, 00:29:36.854 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:36.854 "assigned_rate_limits": { 00:29:36.854 "rw_ios_per_sec": 0, 00:29:36.854 "rw_mbytes_per_sec": 0, 00:29:36.854 "r_mbytes_per_sec": 0, 00:29:36.854 "w_mbytes_per_sec": 0 00:29:36.854 }, 00:29:36.854 "claimed": true, 00:29:36.854 "claim_type": "exclusive_write", 00:29:36.854 "zoned": false, 00:29:36.854 "supported_io_types": { 00:29:36.854 "read": true, 00:29:36.854 "write": true, 00:29:36.854 "unmap": true, 00:29:36.855 "write_zeroes": true, 00:29:36.855 "flush": true, 00:29:36.855 "reset": true, 00:29:36.855 "compare": false, 00:29:36.855 "compare_and_write": false, 00:29:36.855 "abort": true, 00:29:36.855 "nvme_admin": false, 00:29:36.855 "nvme_io": false 00:29:36.855 }, 00:29:36.855 "memory_domains": [ 00:29:36.855 { 00:29:36.855 "dma_device_id": "system", 00:29:36.855 "dma_device_type": 1 00:29:36.855 }, 00:29:36.855 { 00:29:36.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.855 "dma_device_type": 2 00:29:36.855 } 00:29:36.855 ], 00:29:36.855 "driver_specific": { 00:29:36.855 "passthru": { 00:29:36.855 "name": "pt2", 00:29:36.855 "base_bdev_name": "malloc2" 00:29:36.855 } 00:29:36.855 } 00:29:36.855 }' 00:29:36.855 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:36.855 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:37.113 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:37.113 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:37.113 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:37.113 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ 
null == null ]] 00:29:37.113 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:37.113 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:37.113 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:37.113 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:37.372 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:37.372 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:37.372 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:37.372 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:37.372 11:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:29:37.631 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:37.631 "name": "pt3", 00:29:37.631 "aliases": [ 00:29:37.631 "22f776ce-31e9-5abf-854b-c5ecae3a6b02" 00:29:37.631 ], 00:29:37.631 "product_name": "passthru", 00:29:37.631 "block_size": 512, 00:29:37.631 "num_blocks": 65536, 00:29:37.631 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:37.631 "assigned_rate_limits": { 00:29:37.631 "rw_ios_per_sec": 0, 00:29:37.631 "rw_mbytes_per_sec": 0, 00:29:37.631 "r_mbytes_per_sec": 0, 00:29:37.631 "w_mbytes_per_sec": 0 00:29:37.631 }, 00:29:37.631 "claimed": true, 00:29:37.631 "claim_type": "exclusive_write", 00:29:37.631 "zoned": false, 00:29:37.631 "supported_io_types": { 00:29:37.631 "read": true, 00:29:37.631 "write": true, 00:29:37.631 "unmap": true, 00:29:37.631 "write_zeroes": true, 00:29:37.631 "flush": true, 00:29:37.631 "reset": true, 00:29:37.631 "compare": false, 00:29:37.631 "compare_and_write": false, 00:29:37.631 "abort": true, 00:29:37.631 "nvme_admin": false, 00:29:37.631 "nvme_io": false 00:29:37.631 }, 00:29:37.631 "memory_domains": [ 00:29:37.631 { 00:29:37.631 "dma_device_id": "system", 00:29:37.631 "dma_device_type": 1 00:29:37.631 }, 00:29:37.631 { 00:29:37.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:37.631 "dma_device_type": 2 00:29:37.631 } 00:29:37.631 ], 00:29:37.631 "driver_specific": { 00:29:37.631 "passthru": { 00:29:37.631 "name": "pt3", 00:29:37.631 "base_bdev_name": "malloc3" 00:29:37.631 } 00:29:37.631 } 00:29:37.631 }' 00:29:37.631 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:37.631 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:37.631 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:37.631 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:37.890 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:37.890 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:37.890 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:37.890 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:37.890 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:37.890 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:37.890 11:22:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:38.149 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:38.149 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:38.149 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:38.408 [2024-05-15 11:22:56.795985] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:38.408 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f0ef0881-3fb0-481e-8816-8b6923abbf34 00:29:38.408 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f0ef0881-3fb0-481e-8816-8b6923abbf34 ']' 00:29:38.408 11:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:38.666 [2024-05-15 11:22:57.047849] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:38.666 [2024-05-15 11:22:57.047888] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:38.666 [2024-05-15 11:22:57.047972] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:38.666 [2024-05-15 11:22:57.048051] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:38.666 [2024-05-15 11:22:57.048064] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:29:38.666 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:38.666 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.666 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:38.666 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:38.666 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:38.666 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:38.925 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:38.925 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:39.184 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:39.184 11:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:39.443 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:29:39.443 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:39.701 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:29:39.701 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:29:39.701 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:29:39.701 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:29:39.701 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:39.701 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.701 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:39.701 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.702 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:39.702 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.702 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:39.702 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:39.702 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:29:39.972 [2024-05-15 11:22:58.428037] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:39.972 [2024-05-15 11:22:58.429677] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:39.972 [2024-05-15 11:22:58.429737] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:39.972 [2024-05-15 11:22:58.429783] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:39.972 [2024-05-15 11:22:58.429901] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:39.972 [2024-05-15 11:22:58.429940] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:29:39.972 [2024-05-15 11:22:58.429998] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:39.972 [2024-05-15 11:22:58.430012] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:29:39.972 request: 00:29:39.972 { 00:29:39.972 "name": "raid_bdev1", 00:29:39.972 "raid_level": "raid1", 00:29:39.972 "base_bdevs": [ 00:29:39.972 "malloc1", 00:29:39.972 "malloc2", 00:29:39.972 "malloc3" 00:29:39.972 ], 00:29:39.972 "superblock": false, 00:29:39.972 "method": "bdev_raid_create", 00:29:39.972 "req_id": 1 00:29:39.972 } 00:29:39.972 Got JSON-RPC error response 00:29:39.972 response: 00:29:39.972 { 00:29:39.972 "code": -17, 00:29:39.972 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:39.972 } 00:29:39.972 11:22:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:29:39.972 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:39.972 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:39.972 11:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:39.972 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:39.972 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:40.272 [2024-05-15 11:22:58.868077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:40.272 [2024-05-15 11:22:58.868183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.272 [2024-05-15 11:22:58.868234] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d680 00:29:40.272 [2024-05-15 11:22:58.868274] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.272 [2024-05-15 11:22:58.870703] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.272 [2024-05-15 11:22:58.870772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:40.272 [2024-05-15 11:22:58.870975] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:29:40.272 [2024-05-15 11:22:58.871064] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:40.272 pt1 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.272 11:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.531 11:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:29:40.532 "name": "raid_bdev1", 00:29:40.532 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:40.532 "strip_size_kb": 0, 00:29:40.532 "state": "configuring", 00:29:40.532 "raid_level": "raid1", 00:29:40.532 "superblock": true, 00:29:40.532 "num_base_bdevs": 3, 00:29:40.532 "num_base_bdevs_discovered": 1, 00:29:40.532 "num_base_bdevs_operational": 3, 00:29:40.532 "base_bdevs_list": [ 00:29:40.532 { 00:29:40.532 "name": "pt1", 00:29:40.532 "uuid": "0f9cfc0a-f79a-5fcf-90ba-ad3531370748", 00:29:40.532 "is_configured": true, 00:29:40.532 "data_offset": 2048, 00:29:40.532 "data_size": 63488 00:29:40.532 }, 00:29:40.532 { 00:29:40.532 "name": null, 00:29:40.532 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:40.532 "is_configured": false, 00:29:40.532 "data_offset": 2048, 00:29:40.532 "data_size": 63488 00:29:40.532 }, 00:29:40.532 { 00:29:40.532 "name": null, 00:29:40.532 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:40.532 "is_configured": false, 00:29:40.532 "data_offset": 2048, 00:29:40.532 "data_size": 63488 00:29:40.532 } 00:29:40.532 ] 00:29:40.532 }' 00:29:40.532 11:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:40.532 11:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.467 11:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:29:41.467 11:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:41.467 [2024-05-15 11:22:59.936344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:41.467 [2024-05-15 11:22:59.936430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.467 [2024-05-15 11:22:59.936479] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ee80 00:29:41.467 [2024-05-15 11:22:59.936502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.467 [2024-05-15 11:22:59.936954] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.467 [2024-05-15 11:22:59.937165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:41.467 [2024-05-15 11:22:59.937304] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:29:41.467 [2024-05-15 11:22:59.937343] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:41.467 pt2 00:29:41.467 11:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:41.726 [2024-05-15 11:23:00.136368] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:41.726 "name": "raid_bdev1", 00:29:41.726 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:41.726 "strip_size_kb": 0, 00:29:41.726 "state": "configuring", 00:29:41.726 "raid_level": "raid1", 00:29:41.726 "superblock": true, 00:29:41.726 "num_base_bdevs": 3, 00:29:41.726 "num_base_bdevs_discovered": 1, 00:29:41.726 "num_base_bdevs_operational": 3, 00:29:41.726 "base_bdevs_list": [ 00:29:41.726 { 00:29:41.726 "name": "pt1", 00:29:41.726 "uuid": "0f9cfc0a-f79a-5fcf-90ba-ad3531370748", 00:29:41.726 "is_configured": true, 00:29:41.726 "data_offset": 2048, 00:29:41.726 "data_size": 63488 00:29:41.726 }, 00:29:41.726 { 00:29:41.726 "name": null, 00:29:41.726 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:41.726 "is_configured": false, 00:29:41.726 "data_offset": 2048, 00:29:41.726 "data_size": 63488 00:29:41.726 }, 00:29:41.726 { 00:29:41.726 "name": null, 00:29:41.726 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:41.726 "is_configured": false, 00:29:41.726 "data_offset": 2048, 00:29:41.726 "data_size": 63488 00:29:41.726 } 00:29:41.726 ] 00:29:41.726 }' 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:41.726 11:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.660 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:42.660 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:42.660 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:42.660 [2024-05-15 11:23:01.252500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:42.660 [2024-05-15 11:23:01.252608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.660 [2024-05-15 11:23:01.252653] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030680 00:29:42.660 [2024-05-15 11:23:01.252683] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.660 [2024-05-15 11:23:01.253281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.660 [2024-05-15 11:23:01.253346] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:42.660 [2024-05-15 11:23:01.253472] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:29:42.661 [2024-05-15 11:23:01.253507] bdev_raid.c:3122:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:29:42.661 pt2 00:29:42.661 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:42.661 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:42.661 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:42.919 [2024-05-15 11:23:01.448534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:42.919 [2024-05-15 11:23:01.448622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.919 [2024-05-15 11:23:01.448670] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031b80 00:29:42.919 [2024-05-15 11:23:01.448702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.919 [2024-05-15 11:23:01.449243] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.919 [2024-05-15 11:23:01.449296] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:42.919 [2024-05-15 11:23:01.449400] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:29:42.919 [2024-05-15 11:23:01.449429] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:42.919 [2024-05-15 11:23:01.449522] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:29:42.919 [2024-05-15 11:23:01.449536] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:42.919 [2024-05-15 11:23:01.449616] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:29:42.919 [2024-05-15 11:23:01.449853] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:29:42.919 [2024-05-15 11:23:01.449871] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:29:42.919 [2024-05-15 11:23:01.449970] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.919 pt3 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 
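The trace that follows runs the `verify_raid_bdev_state` check: it queries the raid bdevs over the JSON-RPC socket and compares the reported fields against the expected state. Below is a minimal, self-contained sketch of that same pattern, assuming (as in this run) an SPDK target listening on /var/tmp/spdk-raid.sock with a raid bdev named raid_bdev1; it is an illustrative reconstruction, not the test script itself.

```bash
#!/usr/bin/env bash
# Sketch of the state-verification pattern visible in this trace:
# fetch all raid bdevs via rpc.py, pick out raid_bdev1 with jq, and
# compare the fields that verify_raid_bdev_state checks.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this run
sock=/var/tmp/spdk-raid.sock                      # socket as used in this run

# Same RPC + jq filter that appears in the log below.
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
       | jq -r '.[] | select(.name == "raid_bdev1")')

state=$(jq -r .state <<< "$info")
level=$(jq -r .raid_level <<< "$info")
discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")

[[ $state == online ]] || { echo "unexpected state: $state"; exit 1; }
[[ $level == raid1 ]]  || { echo "unexpected raid level: $level"; exit 1; }
echo "raid_bdev1 is online ($level) with $discovered base bdevs discovered"
```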
00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.919 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.194 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:43.194 "name": "raid_bdev1", 00:29:43.194 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:43.194 "strip_size_kb": 0, 00:29:43.194 "state": "online", 00:29:43.194 "raid_level": "raid1", 00:29:43.194 "superblock": true, 00:29:43.194 "num_base_bdevs": 3, 00:29:43.194 "num_base_bdevs_discovered": 3, 00:29:43.194 "num_base_bdevs_operational": 3, 00:29:43.194 "base_bdevs_list": [ 00:29:43.194 { 00:29:43.194 "name": "pt1", 00:29:43.194 "uuid": "0f9cfc0a-f79a-5fcf-90ba-ad3531370748", 00:29:43.194 "is_configured": true, 00:29:43.194 "data_offset": 2048, 00:29:43.194 "data_size": 63488 00:29:43.194 }, 00:29:43.194 { 00:29:43.194 "name": "pt2", 00:29:43.194 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:43.194 "is_configured": true, 00:29:43.194 "data_offset": 2048, 00:29:43.194 "data_size": 63488 00:29:43.194 }, 00:29:43.194 { 00:29:43.194 "name": "pt3", 00:29:43.194 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:43.194 "is_configured": true, 00:29:43.194 "data_offset": 2048, 00:29:43.194 "data_size": 63488 00:29:43.194 } 00:29:43.194 ] 00:29:43.194 }' 00:29:43.194 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:43.194 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.780 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:43.780 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:29:43.780 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:29:43.780 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:29:43.780 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:29:43.780 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:29:43.780 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:44.039 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:29:44.039 [2024-05-15 11:23:02.640893] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:44.039 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:29:44.039 "name": "raid_bdev1", 00:29:44.039 "aliases": [ 00:29:44.039 "f0ef0881-3fb0-481e-8816-8b6923abbf34" 00:29:44.039 ], 00:29:44.039 "product_name": "Raid Volume", 00:29:44.039 "block_size": 512, 00:29:44.039 "num_blocks": 63488, 00:29:44.039 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:44.039 "assigned_rate_limits": { 00:29:44.039 "rw_ios_per_sec": 0, 00:29:44.039 "rw_mbytes_per_sec": 0, 00:29:44.039 "r_mbytes_per_sec": 0, 00:29:44.039 "w_mbytes_per_sec": 0 00:29:44.039 }, 00:29:44.039 "claimed": false, 00:29:44.039 "zoned": false, 00:29:44.039 "supported_io_types": { 00:29:44.039 "read": true, 00:29:44.039 "write": true, 00:29:44.039 "unmap": false, 00:29:44.039 "write_zeroes": true, 00:29:44.039 
"flush": false, 00:29:44.039 "reset": true, 00:29:44.039 "compare": false, 00:29:44.039 "compare_and_write": false, 00:29:44.039 "abort": false, 00:29:44.039 "nvme_admin": false, 00:29:44.039 "nvme_io": false 00:29:44.039 }, 00:29:44.039 "memory_domains": [ 00:29:44.039 { 00:29:44.039 "dma_device_id": "system", 00:29:44.039 "dma_device_type": 1 00:29:44.039 }, 00:29:44.039 { 00:29:44.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:44.039 "dma_device_type": 2 00:29:44.039 }, 00:29:44.039 { 00:29:44.039 "dma_device_id": "system", 00:29:44.039 "dma_device_type": 1 00:29:44.039 }, 00:29:44.039 { 00:29:44.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:44.039 "dma_device_type": 2 00:29:44.039 }, 00:29:44.039 { 00:29:44.039 "dma_device_id": "system", 00:29:44.039 "dma_device_type": 1 00:29:44.039 }, 00:29:44.039 { 00:29:44.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:44.039 "dma_device_type": 2 00:29:44.039 } 00:29:44.039 ], 00:29:44.039 "driver_specific": { 00:29:44.039 "raid": { 00:29:44.039 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:44.039 "strip_size_kb": 0, 00:29:44.039 "state": "online", 00:29:44.039 "raid_level": "raid1", 00:29:44.039 "superblock": true, 00:29:44.039 "num_base_bdevs": 3, 00:29:44.039 "num_base_bdevs_discovered": 3, 00:29:44.039 "num_base_bdevs_operational": 3, 00:29:44.039 "base_bdevs_list": [ 00:29:44.039 { 00:29:44.039 "name": "pt1", 00:29:44.039 "uuid": "0f9cfc0a-f79a-5fcf-90ba-ad3531370748", 00:29:44.039 "is_configured": true, 00:29:44.039 "data_offset": 2048, 00:29:44.039 "data_size": 63488 00:29:44.039 }, 00:29:44.039 { 00:29:44.039 "name": "pt2", 00:29:44.039 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:44.039 "is_configured": true, 00:29:44.039 "data_offset": 2048, 00:29:44.039 "data_size": 63488 00:29:44.039 }, 00:29:44.039 { 00:29:44.039 "name": "pt3", 00:29:44.039 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:44.039 "is_configured": true, 00:29:44.039 "data_offset": 2048, 00:29:44.039 "data_size": 63488 00:29:44.039 } 00:29:44.039 ] 00:29:44.039 } 00:29:44.039 } 00:29:44.039 }' 00:29:44.039 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:44.298 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:29:44.298 pt2 00:29:44.298 pt3' 00:29:44.298 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:44.298 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:44.298 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:44.298 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:44.298 "name": "pt1", 00:29:44.298 "aliases": [ 00:29:44.298 "0f9cfc0a-f79a-5fcf-90ba-ad3531370748" 00:29:44.298 ], 00:29:44.298 "product_name": "passthru", 00:29:44.298 "block_size": 512, 00:29:44.298 "num_blocks": 65536, 00:29:44.298 "uuid": "0f9cfc0a-f79a-5fcf-90ba-ad3531370748", 00:29:44.298 "assigned_rate_limits": { 00:29:44.298 "rw_ios_per_sec": 0, 00:29:44.298 "rw_mbytes_per_sec": 0, 00:29:44.298 "r_mbytes_per_sec": 0, 00:29:44.298 "w_mbytes_per_sec": 0 00:29:44.298 }, 00:29:44.298 "claimed": true, 00:29:44.298 "claim_type": "exclusive_write", 00:29:44.298 "zoned": false, 00:29:44.298 "supported_io_types": { 00:29:44.298 "read": true, 00:29:44.298 "write": 
true, 00:29:44.298 "unmap": true, 00:29:44.298 "write_zeroes": true, 00:29:44.298 "flush": true, 00:29:44.298 "reset": true, 00:29:44.298 "compare": false, 00:29:44.298 "compare_and_write": false, 00:29:44.298 "abort": true, 00:29:44.298 "nvme_admin": false, 00:29:44.298 "nvme_io": false 00:29:44.298 }, 00:29:44.298 "memory_domains": [ 00:29:44.298 { 00:29:44.298 "dma_device_id": "system", 00:29:44.298 "dma_device_type": 1 00:29:44.298 }, 00:29:44.298 { 00:29:44.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:44.298 "dma_device_type": 2 00:29:44.298 } 00:29:44.298 ], 00:29:44.298 "driver_specific": { 00:29:44.298 "passthru": { 00:29:44.298 "name": "pt1", 00:29:44.298 "base_bdev_name": "malloc1" 00:29:44.298 } 00:29:44.298 } 00:29:44.298 }' 00:29:44.298 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:44.556 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:44.556 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:44.556 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:44.556 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:44.556 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:44.556 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:44.814 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:44.814 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:44.814 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:44.814 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:44.814 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:44.814 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:44.814 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:44.814 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:45.073 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:45.073 "name": "pt2", 00:29:45.073 "aliases": [ 00:29:45.073 "c14adf7d-a7f7-588e-ad48-204700890c02" 00:29:45.073 ], 00:29:45.073 "product_name": "passthru", 00:29:45.073 "block_size": 512, 00:29:45.073 "num_blocks": 65536, 00:29:45.073 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:45.073 "assigned_rate_limits": { 00:29:45.073 "rw_ios_per_sec": 0, 00:29:45.073 "rw_mbytes_per_sec": 0, 00:29:45.073 "r_mbytes_per_sec": 0, 00:29:45.073 "w_mbytes_per_sec": 0 00:29:45.073 }, 00:29:45.073 "claimed": true, 00:29:45.073 "claim_type": "exclusive_write", 00:29:45.073 "zoned": false, 00:29:45.073 "supported_io_types": { 00:29:45.073 "read": true, 00:29:45.073 "write": true, 00:29:45.073 "unmap": true, 00:29:45.073 "write_zeroes": true, 00:29:45.073 "flush": true, 00:29:45.073 "reset": true, 00:29:45.073 "compare": false, 00:29:45.073 "compare_and_write": false, 00:29:45.073 "abort": true, 00:29:45.073 "nvme_admin": false, 00:29:45.073 "nvme_io": false 00:29:45.073 }, 00:29:45.073 "memory_domains": [ 00:29:45.073 { 00:29:45.073 "dma_device_id": "system", 00:29:45.073 "dma_device_type": 1 00:29:45.073 }, 00:29:45.073 
{ 00:29:45.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:45.073 "dma_device_type": 2 00:29:45.073 } 00:29:45.073 ], 00:29:45.073 "driver_specific": { 00:29:45.073 "passthru": { 00:29:45.073 "name": "pt2", 00:29:45.073 "base_bdev_name": "malloc2" 00:29:45.073 } 00:29:45.073 } 00:29:45.073 }' 00:29:45.073 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:45.332 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:45.332 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:45.332 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:45.332 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:45.332 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:45.332 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:45.591 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:45.591 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:45.591 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:45.591 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:45.591 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:45.591 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:29:45.591 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:29:45.591 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:29:45.849 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:29:45.849 "name": "pt3", 00:29:45.849 "aliases": [ 00:29:45.849 "22f776ce-31e9-5abf-854b-c5ecae3a6b02" 00:29:45.849 ], 00:29:45.849 "product_name": "passthru", 00:29:45.849 "block_size": 512, 00:29:45.849 "num_blocks": 65536, 00:29:45.849 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:45.849 "assigned_rate_limits": { 00:29:45.849 "rw_ios_per_sec": 0, 00:29:45.849 "rw_mbytes_per_sec": 0, 00:29:45.849 "r_mbytes_per_sec": 0, 00:29:45.849 "w_mbytes_per_sec": 0 00:29:45.849 }, 00:29:45.849 "claimed": true, 00:29:45.849 "claim_type": "exclusive_write", 00:29:45.849 "zoned": false, 00:29:45.849 "supported_io_types": { 00:29:45.849 "read": true, 00:29:45.849 "write": true, 00:29:45.849 "unmap": true, 00:29:45.849 "write_zeroes": true, 00:29:45.849 "flush": true, 00:29:45.849 "reset": true, 00:29:45.849 "compare": false, 00:29:45.849 "compare_and_write": false, 00:29:45.849 "abort": true, 00:29:45.849 "nvme_admin": false, 00:29:45.849 "nvme_io": false 00:29:45.849 }, 00:29:45.849 "memory_domains": [ 00:29:45.849 { 00:29:45.849 "dma_device_id": "system", 00:29:45.849 "dma_device_type": 1 00:29:45.849 }, 00:29:45.849 { 00:29:45.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:45.849 "dma_device_type": 2 00:29:45.849 } 00:29:45.849 ], 00:29:45.849 "driver_specific": { 00:29:45.849 "passthru": { 00:29:45.849 "name": "pt3", 00:29:45.849 "base_bdev_name": "malloc3" 00:29:45.849 } 00:29:45.849 } 00:29:45.849 }' 00:29:45.849 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:45.849 11:23:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:29:46.107 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:29:46.107 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:46.107 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:29:46.107 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:46.107 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:46.107 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:29:46.366 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:46.366 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:46.366 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:29:46.366 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:29:46.366 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:46.366 11:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:46.625 [2024-05-15 11:23:05.117541] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:46.625 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f0ef0881-3fb0-481e-8816-8b6923abbf34 '!=' f0ef0881-3fb0-481e-8816-8b6923abbf34 ']' 00:29:46.625 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:29:46.625 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:29:46.625 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:29:46.625 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:46.883 [2024-05-15 11:23:05.365495] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.883 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.142 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:47.142 "name": "raid_bdev1", 00:29:47.142 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:47.142 "strip_size_kb": 0, 00:29:47.142 "state": "online", 00:29:47.142 "raid_level": "raid1", 00:29:47.142 "superblock": true, 00:29:47.142 "num_base_bdevs": 3, 00:29:47.142 "num_base_bdevs_discovered": 2, 00:29:47.142 "num_base_bdevs_operational": 2, 00:29:47.142 "base_bdevs_list": [ 00:29:47.142 { 00:29:47.142 "name": null, 00:29:47.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.142 "is_configured": false, 00:29:47.142 "data_offset": 2048, 00:29:47.142 "data_size": 63488 00:29:47.142 }, 00:29:47.142 { 00:29:47.142 "name": "pt2", 00:29:47.142 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:47.142 "is_configured": true, 00:29:47.142 "data_offset": 2048, 00:29:47.142 "data_size": 63488 00:29:47.142 }, 00:29:47.142 { 00:29:47.142 "name": "pt3", 00:29:47.142 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:47.142 "is_configured": true, 00:29:47.142 "data_offset": 2048, 00:29:47.142 "data_size": 63488 00:29:47.142 } 00:29:47.142 ] 00:29:47.142 }' 00:29:47.142 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:47.142 11:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.714 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:47.994 [2024-05-15 11:23:06.605618] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:47.994 [2024-05-15 11:23:06.605677] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:47.994 [2024-05-15 11:23:06.605755] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:47.994 [2024-05-15 11:23:06.605800] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:47.994 [2024-05-15 11:23:06.605811] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:29:47.994 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.994 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:29:48.253 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:29:48.253 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:29:48.253 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:29:48.253 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:48.253 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:48.512 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:48.512 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:48.512 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:48.770 11:23:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:48.770 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:48.770 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:29:48.770 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:48.770 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:49.029 [2024-05-15 11:23:07.453729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:49.029 [2024-05-15 11:23:07.453968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:49.029 [2024-05-15 11:23:07.454033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033080 00:29:49.029 [2024-05-15 11:23:07.454068] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:49.029 [2024-05-15 11:23:07.455769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:49.030 [2024-05-15 11:23:07.455823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:49.030 [2024-05-15 11:23:07.455928] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:29:49.030 [2024-05-15 11:23:07.455983] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:49.030 pt2 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.030 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.288 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:49.288 "name": "raid_bdev1", 00:29:49.288 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:49.288 "strip_size_kb": 0, 00:29:49.288 "state": "configuring", 00:29:49.288 "raid_level": "raid1", 00:29:49.288 "superblock": true, 00:29:49.288 "num_base_bdevs": 3, 00:29:49.288 "num_base_bdevs_discovered": 1, 00:29:49.288 "num_base_bdevs_operational": 2, 00:29:49.288 "base_bdevs_list": [ 00:29:49.288 { 00:29:49.288 
"name": null, 00:29:49.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.288 "is_configured": false, 00:29:49.288 "data_offset": 2048, 00:29:49.288 "data_size": 63488 00:29:49.288 }, 00:29:49.288 { 00:29:49.288 "name": "pt2", 00:29:49.288 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:49.288 "is_configured": true, 00:29:49.288 "data_offset": 2048, 00:29:49.288 "data_size": 63488 00:29:49.288 }, 00:29:49.288 { 00:29:49.288 "name": null, 00:29:49.288 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:49.288 "is_configured": false, 00:29:49.288 "data_offset": 2048, 00:29:49.288 "data_size": 63488 00:29:49.288 } 00:29:49.288 ] 00:29:49.288 }' 00:29:49.288 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:49.288 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.854 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:29:49.854 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:49.854 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:29:49.854 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:50.113 [2024-05-15 11:23:08.610011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:50.113 [2024-05-15 11:23:08.610099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:50.113 [2024-05-15 11:23:08.610151] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034880 00:29:50.113 [2024-05-15 11:23:08.610179] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:50.113 [2024-05-15 11:23:08.610551] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:50.113 [2024-05-15 11:23:08.610584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:50.113 [2024-05-15 11:23:08.610702] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:29:50.113 [2024-05-15 11:23:08.610746] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:50.113 [2024-05-15 11:23:08.610832] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:29:50.113 [2024-05-15 11:23:08.611035] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:50.113 [2024-05-15 11:23:08.611132] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:50.114 [2024-05-15 11:23:08.611362] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:29:50.114 [2024-05-15 11:23:08.611378] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:29:50.114 [2024-05-15 11:23:08.611503] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.114 pt3 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:50.114 11:23:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.114 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.372 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:50.372 "name": "raid_bdev1", 00:29:50.372 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:50.372 "strip_size_kb": 0, 00:29:50.372 "state": "online", 00:29:50.372 "raid_level": "raid1", 00:29:50.372 "superblock": true, 00:29:50.372 "num_base_bdevs": 3, 00:29:50.372 "num_base_bdevs_discovered": 2, 00:29:50.372 "num_base_bdevs_operational": 2, 00:29:50.372 "base_bdevs_list": [ 00:29:50.372 { 00:29:50.372 "name": null, 00:29:50.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.372 "is_configured": false, 00:29:50.372 "data_offset": 2048, 00:29:50.372 "data_size": 63488 00:29:50.372 }, 00:29:50.372 { 00:29:50.372 "name": "pt2", 00:29:50.372 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:50.372 "is_configured": true, 00:29:50.372 "data_offset": 2048, 00:29:50.372 "data_size": 63488 00:29:50.372 }, 00:29:50.372 { 00:29:50.372 "name": "pt3", 00:29:50.372 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:50.372 "is_configured": true, 00:29:50.372 "data_offset": 2048, 00:29:50.372 "data_size": 63488 00:29:50.372 } 00:29:50.372 ] 00:29:50.372 }' 00:29:50.372 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:50.372 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.313 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 3 -gt 2 ']' 00:29:51.313 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:51.313 [2024-05-15 11:23:09.806207] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:51.313 [2024-05-15 11:23:09.806245] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:51.313 [2024-05-15 11:23:09.806315] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:51.313 [2024-05-15 11:23:09.806363] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:51.313 [2024-05-15 11:23:09.806375] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:29:51.313 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.313 
11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # jq -r '.[]' 00:29:51.585 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # raid_bdev= 00:29:51.585 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@529 -- # '[' -n '' ']' 00:29:51.585 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:51.844 [2024-05-15 11:23:10.326282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:51.844 [2024-05-15 11:23:10.326382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:51.844 [2024-05-15 11:23:10.326452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035d80 00:29:51.844 [2024-05-15 11:23:10.326481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:51.844 [2024-05-15 11:23:10.329144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:51.844 [2024-05-15 11:23:10.329200] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:51.844 [2024-05-15 11:23:10.329329] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:29:51.844 [2024-05-15 11:23:10.329395] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:51.844 pt1 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.844 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.102 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:52.102 "name": "raid_bdev1", 00:29:52.102 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:52.102 "strip_size_kb": 0, 00:29:52.102 "state": "configuring", 00:29:52.102 "raid_level": "raid1", 00:29:52.102 "superblock": true, 00:29:52.102 "num_base_bdevs": 3, 00:29:52.102 "num_base_bdevs_discovered": 1, 00:29:52.102 "num_base_bdevs_operational": 3, 00:29:52.102 "base_bdevs_list": [ 00:29:52.102 { 00:29:52.102 "name": "pt1", 00:29:52.102 "uuid": "0f9cfc0a-f79a-5fcf-90ba-ad3531370748", 00:29:52.102 "is_configured": true, 
00:29:52.102 "data_offset": 2048, 00:29:52.102 "data_size": 63488 00:29:52.102 }, 00:29:52.102 { 00:29:52.102 "name": null, 00:29:52.102 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:52.102 "is_configured": false, 00:29:52.102 "data_offset": 2048, 00:29:52.103 "data_size": 63488 00:29:52.103 }, 00:29:52.103 { 00:29:52.103 "name": null, 00:29:52.103 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:52.103 "is_configured": false, 00:29:52.103 "data_offset": 2048, 00:29:52.103 "data_size": 63488 00:29:52.103 } 00:29:52.103 ] 00:29:52.103 }' 00:29:52.103 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:52.103 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.669 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i = 1 )) 00:29:52.669 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:29:52.669 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:52.927 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:29:52.928 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:29:52.928 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:53.187 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:29:53.187 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:29:53.187 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # i=2 00:29:53.187 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:53.446 [2024-05-15 11:23:11.974565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:53.446 [2024-05-15 11:23:11.974724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:53.446 [2024-05-15 11:23:11.974770] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000037580 00:29:53.446 [2024-05-15 11:23:11.974820] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:53.446 [2024-05-15 11:23:11.975418] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:53.446 [2024-05-15 11:23:11.975465] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:53.446 [2024-05-15 11:23:11.975582] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:29:53.446 [2024-05-15 11:23:11.975600] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:53.446 [2024-05-15 11:23:11.975608] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:53.446 [2024-05-15 11:23:11.975634] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state configuring 00:29:53.446 [2024-05-15 11:23:11.975711] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:53.446 pt3 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@551 
-- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.446 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.705 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:53.705 "name": "raid_bdev1", 00:29:53.705 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:53.705 "strip_size_kb": 0, 00:29:53.705 "state": "configuring", 00:29:53.705 "raid_level": "raid1", 00:29:53.705 "superblock": true, 00:29:53.705 "num_base_bdevs": 3, 00:29:53.705 "num_base_bdevs_discovered": 1, 00:29:53.705 "num_base_bdevs_operational": 2, 00:29:53.705 "base_bdevs_list": [ 00:29:53.705 { 00:29:53.705 "name": null, 00:29:53.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.705 "is_configured": false, 00:29:53.705 "data_offset": 2048, 00:29:53.705 "data_size": 63488 00:29:53.705 }, 00:29:53.705 { 00:29:53.705 "name": null, 00:29:53.705 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:53.705 "is_configured": false, 00:29:53.705 "data_offset": 2048, 00:29:53.705 "data_size": 63488 00:29:53.705 }, 00:29:53.705 { 00:29:53.705 "name": "pt3", 00:29:53.705 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:53.705 "is_configured": true, 00:29:53.705 "data_offset": 2048, 00:29:53.705 "data_size": 63488 00:29:53.705 } 00:29:53.705 ] 00:29:53.705 }' 00:29:53.705 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:53.705 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i = 1 )) 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:54.646 [2024-05-15 11:23:13.242851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:54.646 [2024-05-15 11:23:13.242950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:54.646 [2024-05-15 11:23:13.243001] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038a80 00:29:54.646 
[2024-05-15 11:23:13.243036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:54.646 [2024-05-15 11:23:13.243410] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:54.646 [2024-05-15 11:23:13.243450] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:54.646 [2024-05-15 11:23:13.243553] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:29:54.646 [2024-05-15 11:23:13.243603] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:54.646 [2024-05-15 11:23:13.243697] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012300 00:29:54.646 [2024-05-15 11:23:13.243712] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:54.646 [2024-05-15 11:23:13.243803] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:54.646 [2024-05-15 11:23:13.244042] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012300 00:29:54.646 [2024-05-15 11:23:13.244061] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012300 00:29:54.646 [2024-05-15 11:23:13.244171] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:54.646 pt2 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@559 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.646 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.907 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:54.907 "name": "raid_bdev1", 00:29:54.907 "uuid": "f0ef0881-3fb0-481e-8816-8b6923abbf34", 00:29:54.907 "strip_size_kb": 0, 00:29:54.907 "state": "online", 00:29:54.907 "raid_level": "raid1", 00:29:54.907 "superblock": true, 00:29:54.907 "num_base_bdevs": 3, 00:29:54.907 "num_base_bdevs_discovered": 2, 00:29:54.907 "num_base_bdevs_operational": 2, 00:29:54.907 "base_bdevs_list": [ 00:29:54.907 { 00:29:54.907 "name": null, 00:29:54.907 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:54.907 "is_configured": false, 00:29:54.907 "data_offset": 2048, 00:29:54.907 "data_size": 63488 00:29:54.907 }, 00:29:54.907 { 00:29:54.907 "name": "pt2", 00:29:54.907 "uuid": "c14adf7d-a7f7-588e-ad48-204700890c02", 00:29:54.907 "is_configured": true, 00:29:54.907 "data_offset": 2048, 00:29:54.907 "data_size": 63488 00:29:54.907 }, 00:29:54.907 { 00:29:54.907 "name": "pt3", 00:29:54.907 "uuid": "22f776ce-31e9-5abf-854b-c5ecae3a6b02", 00:29:54.907 "is_configured": true, 00:29:54.907 "data_offset": 2048, 00:29:54.907 "data_size": 63488 00:29:54.907 } 00:29:54.907 ] 00:29:54.907 }' 00:29:54.907 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:54.907 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:29:55.852 [2024-05-15 11:23:14.355258] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' f0ef0881-3fb0-481e-8816-8b6923abbf34 '!=' f0ef0881-3fb0-481e-8816-8b6923abbf34 ']' 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 63681 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 63681 ']' 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 63681 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63681 00:29:55.852 killing process with pid 63681 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63681' 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 63681 00:29:55.852 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 63681 00:29:55.852 [2024-05-15 11:23:14.401605] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:55.852 [2024-05-15 11:23:14.401706] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:55.852 [2024-05-15 11:23:14.401756] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:55.852 [2024-05-15 11:23:14.401768] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012300 name raid_bdev1, state offline 00:29:56.111 [2024-05-15 11:23:14.662453] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:57.486 11:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:29:57.486 00:29:57.486 real 0m25.326s 00:29:57.486 user 0m47.453s 00:29:57.486 sys 0m2.495s 00:29:57.486 11:23:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:29:57.486 11:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.486 ************************************ 00:29:57.486 END TEST raid_superblock_test 00:29:57.486 ************************************ 00:29:57.486 11:23:16 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:29:57.486 11:23:16 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:29:57.486 11:23:16 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:29:57.486 11:23:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:29:57.486 11:23:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:57.486 11:23:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:57.486 ************************************ 00:29:57.486 START TEST raid_state_function_test 00:29:57.486 ************************************ 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:29:57.486 Process raid pid: 64482 00:29:57.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=64482 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 64482' 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 64482 /var/tmp/spdk-raid.sock 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 64482 ']' 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:57.486 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.745 [2024-05-15 11:23:16.164451] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:29:57.745 [2024-05-15 11:23:16.164649] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.745 [2024-05-15 11:23:16.323772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.004 [2024-05-15 11:23:16.571252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.263 [2024-05-15 11:23:16.780265] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:58.521 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:58.521 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:29:58.521 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:58.781 [2024-05-15 11:23:17.207923] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:58.781 [2024-05-15 11:23:17.208010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:58.781 [2024-05-15 11:23:17.208026] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:58.781 [2024-05-15 11:23:17.208046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:58.781 [2024-05-15 11:23:17.208055] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:58.781 [2024-05-15 11:23:17.208099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:58.781 [2024-05-15 11:23:17.208111] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:58.781 [2024-05-15 11:23:17.208134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:58.781 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.038 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:59.038 "name": "Existed_Raid", 00:29:59.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.038 "strip_size_kb": 64, 00:29:59.038 "state": "configuring", 00:29:59.038 "raid_level": "raid0", 00:29:59.038 "superblock": false, 00:29:59.038 "num_base_bdevs": 4, 00:29:59.039 "num_base_bdevs_discovered": 0, 00:29:59.039 "num_base_bdevs_operational": 4, 00:29:59.039 "base_bdevs_list": [ 00:29:59.039 { 00:29:59.039 "name": "BaseBdev1", 00:29:59.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.039 "is_configured": false, 00:29:59.039 "data_offset": 0, 00:29:59.039 "data_size": 0 00:29:59.039 }, 00:29:59.039 { 00:29:59.039 "name": "BaseBdev2", 00:29:59.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.039 "is_configured": false, 00:29:59.039 "data_offset": 0, 00:29:59.039 "data_size": 0 00:29:59.039 }, 00:29:59.039 { 00:29:59.039 "name": "BaseBdev3", 00:29:59.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.039 "is_configured": false, 00:29:59.039 "data_offset": 0, 00:29:59.039 "data_size": 0 00:29:59.039 }, 00:29:59.039 { 00:29:59.039 "name": "BaseBdev4", 00:29:59.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.039 "is_configured": false, 00:29:59.039 "data_offset": 0, 00:29:59.039 "data_size": 0 00:29:59.039 } 00:29:59.039 ] 00:29:59.039 }' 00:29:59.039 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:59.039 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.603 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:59.861 [2024-05-15 11:23:18.292006] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:59.861 [2024-05-15 11:23:18.292048] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:29:59.861 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:59.861 [2024-05-15 11:23:18.488066] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:59.861 [2024-05-15 11:23:18.488215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:59.861 [2024-05-15 11:23:18.488247] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:59.861 [2024-05-15 11:23:18.488277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:59.861 [2024-05-15 11:23:18.488287] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:59.861 [2024-05-15 11:23:18.488306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:59.861 [2024-05-15 11:23:18.488315] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:59.861 [2024-05-15 11:23:18.488343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:00.119 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:00.119 [2024-05-15 11:23:18.730478] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:00.119 BaseBdev1 00:30:00.119 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:30:00.119 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:00.119 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:00.119 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:00.119 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:00.119 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:00.119 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:00.377 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:00.635 [ 00:30:00.636 { 00:30:00.636 "name": "BaseBdev1", 00:30:00.636 "aliases": [ 00:30:00.636 "57d27982-3f22-4928-9b67-758996034153" 00:30:00.636 ], 00:30:00.636 "product_name": "Malloc disk", 00:30:00.636 "block_size": 512, 00:30:00.636 "num_blocks": 65536, 00:30:00.636 "uuid": "57d27982-3f22-4928-9b67-758996034153", 00:30:00.636 "assigned_rate_limits": { 00:30:00.636 "rw_ios_per_sec": 0, 00:30:00.636 "rw_mbytes_per_sec": 0, 00:30:00.636 "r_mbytes_per_sec": 0, 00:30:00.636 "w_mbytes_per_sec": 0 00:30:00.636 }, 00:30:00.636 "claimed": true, 00:30:00.636 "claim_type": "exclusive_write", 00:30:00.636 "zoned": false, 00:30:00.636 "supported_io_types": { 00:30:00.636 "read": true, 00:30:00.636 "write": true, 00:30:00.636 "unmap": true, 00:30:00.636 "write_zeroes": true, 00:30:00.636 "flush": true, 00:30:00.636 "reset": true, 00:30:00.636 "compare": false, 00:30:00.636 "compare_and_write": false, 00:30:00.636 "abort": true, 00:30:00.636 "nvme_admin": false, 00:30:00.636 "nvme_io": false 00:30:00.636 }, 00:30:00.636 "memory_domains": [ 00:30:00.636 { 00:30:00.636 "dma_device_id": "system", 00:30:00.636 "dma_device_type": 1 00:30:00.636 }, 00:30:00.636 { 00:30:00.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:00.636 "dma_device_type": 2 00:30:00.636 } 00:30:00.636 ], 00:30:00.636 "driver_specific": {} 00:30:00.636 } 00:30:00.636 ] 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:00.636 11:23:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:00.636 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.894 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:00.894 "name": "Existed_Raid", 00:30:00.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.894 "strip_size_kb": 64, 00:30:00.894 "state": "configuring", 00:30:00.894 "raid_level": "raid0", 00:30:00.894 "superblock": false, 00:30:00.894 "num_base_bdevs": 4, 00:30:00.894 "num_base_bdevs_discovered": 1, 00:30:00.894 "num_base_bdevs_operational": 4, 00:30:00.894 "base_bdevs_list": [ 00:30:00.894 { 00:30:00.894 "name": "BaseBdev1", 00:30:00.894 "uuid": "57d27982-3f22-4928-9b67-758996034153", 00:30:00.894 "is_configured": true, 00:30:00.894 "data_offset": 0, 00:30:00.894 "data_size": 65536 00:30:00.894 }, 00:30:00.894 { 00:30:00.894 "name": "BaseBdev2", 00:30:00.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.894 "is_configured": false, 00:30:00.894 "data_offset": 0, 00:30:00.894 "data_size": 0 00:30:00.894 }, 00:30:00.894 { 00:30:00.894 "name": "BaseBdev3", 00:30:00.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.894 "is_configured": false, 00:30:00.895 "data_offset": 0, 00:30:00.895 "data_size": 0 00:30:00.895 }, 00:30:00.895 { 00:30:00.895 "name": "BaseBdev4", 00:30:00.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.895 "is_configured": false, 00:30:00.895 "data_offset": 0, 00:30:00.895 "data_size": 0 00:30:00.895 } 00:30:00.895 ] 00:30:00.895 }' 00:30:00.895 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:00.895 11:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.461 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:01.719 [2024-05-15 11:23:20.246778] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:01.719 [2024-05-15 11:23:20.246837] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:30:01.719 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:01.976 [2024-05-15 11:23:20.482884] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:01.976 [2024-05-15 11:23:20.484566] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:01.976 [2024-05-15 11:23:20.484684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:01.976 [2024-05-15 11:23:20.484707] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:30:01.976 [2024-05-15 11:23:20.484735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:01.976 [2024-05-15 11:23:20.484745] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:01.976 [2024-05-15 11:23:20.484765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.976 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:02.234 11:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:02.234 "name": "Existed_Raid", 00:30:02.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.234 "strip_size_kb": 64, 00:30:02.234 "state": "configuring", 00:30:02.234 "raid_level": "raid0", 00:30:02.234 "superblock": false, 00:30:02.234 "num_base_bdevs": 4, 00:30:02.234 "num_base_bdevs_discovered": 1, 00:30:02.234 "num_base_bdevs_operational": 4, 00:30:02.234 "base_bdevs_list": [ 00:30:02.234 { 00:30:02.234 "name": "BaseBdev1", 00:30:02.234 "uuid": "57d27982-3f22-4928-9b67-758996034153", 00:30:02.234 "is_configured": true, 00:30:02.234 "data_offset": 0, 00:30:02.234 "data_size": 65536 00:30:02.234 }, 00:30:02.234 { 00:30:02.234 "name": "BaseBdev2", 00:30:02.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.234 "is_configured": false, 00:30:02.234 "data_offset": 0, 00:30:02.234 "data_size": 0 00:30:02.234 }, 00:30:02.234 { 00:30:02.234 "name": "BaseBdev3", 00:30:02.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.234 "is_configured": false, 00:30:02.234 "data_offset": 0, 00:30:02.234 "data_size": 0 00:30:02.234 }, 00:30:02.234 { 00:30:02.234 "name": "BaseBdev4", 00:30:02.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.234 "is_configured": false, 00:30:02.234 "data_offset": 0, 00:30:02.234 "data_size": 0 00:30:02.234 } 00:30:02.234 ] 00:30:02.234 }' 00:30:02.234 11:23:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:02.234 11:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.167 11:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:03.167 [2024-05-15 11:23:21.705148] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:03.167 BaseBdev2 00:30:03.167 11:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:30:03.167 11:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:03.167 11:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:03.167 11:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:03.167 11:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:03.167 11:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:03.167 11:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:03.424 11:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:03.683 [ 00:30:03.683 { 00:30:03.683 "name": "BaseBdev2", 00:30:03.683 "aliases": [ 00:30:03.683 "9ee96117-c8ec-4038-887a-d54c0772032d" 00:30:03.683 ], 00:30:03.683 "product_name": "Malloc disk", 00:30:03.683 "block_size": 512, 00:30:03.683 "num_blocks": 65536, 00:30:03.683 "uuid": "9ee96117-c8ec-4038-887a-d54c0772032d", 00:30:03.683 "assigned_rate_limits": { 00:30:03.683 "rw_ios_per_sec": 0, 00:30:03.683 "rw_mbytes_per_sec": 0, 00:30:03.683 "r_mbytes_per_sec": 0, 00:30:03.683 "w_mbytes_per_sec": 0 00:30:03.683 }, 00:30:03.683 "claimed": true, 00:30:03.683 "claim_type": "exclusive_write", 00:30:03.683 "zoned": false, 00:30:03.683 "supported_io_types": { 00:30:03.683 "read": true, 00:30:03.683 "write": true, 00:30:03.683 "unmap": true, 00:30:03.683 "write_zeroes": true, 00:30:03.683 "flush": true, 00:30:03.683 "reset": true, 00:30:03.683 "compare": false, 00:30:03.683 "compare_and_write": false, 00:30:03.683 "abort": true, 00:30:03.683 "nvme_admin": false, 00:30:03.683 "nvme_io": false 00:30:03.683 }, 00:30:03.683 "memory_domains": [ 00:30:03.683 { 00:30:03.683 "dma_device_id": "system", 00:30:03.683 "dma_device_type": 1 00:30:03.683 }, 00:30:03.683 { 00:30:03.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:03.683 "dma_device_type": 2 00:30:03.683 } 00:30:03.683 ], 00:30:03.683 "driver_specific": {} 00:30:03.683 } 00:30:03.683 ] 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:03.683 11:23:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.683 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:03.941 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:03.941 "name": "Existed_Raid", 00:30:03.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.941 "strip_size_kb": 64, 00:30:03.941 "state": "configuring", 00:30:03.941 "raid_level": "raid0", 00:30:03.941 "superblock": false, 00:30:03.941 "num_base_bdevs": 4, 00:30:03.941 "num_base_bdevs_discovered": 2, 00:30:03.941 "num_base_bdevs_operational": 4, 00:30:03.941 "base_bdevs_list": [ 00:30:03.941 { 00:30:03.941 "name": "BaseBdev1", 00:30:03.941 "uuid": "57d27982-3f22-4928-9b67-758996034153", 00:30:03.941 "is_configured": true, 00:30:03.941 "data_offset": 0, 00:30:03.941 "data_size": 65536 00:30:03.941 }, 00:30:03.941 { 00:30:03.941 "name": "BaseBdev2", 00:30:03.941 "uuid": "9ee96117-c8ec-4038-887a-d54c0772032d", 00:30:03.941 "is_configured": true, 00:30:03.941 "data_offset": 0, 00:30:03.941 "data_size": 65536 00:30:03.941 }, 00:30:03.941 { 00:30:03.941 "name": "BaseBdev3", 00:30:03.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.941 "is_configured": false, 00:30:03.941 "data_offset": 0, 00:30:03.941 "data_size": 0 00:30:03.941 }, 00:30:03.941 { 00:30:03.941 "name": "BaseBdev4", 00:30:03.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.941 "is_configured": false, 00:30:03.941 "data_offset": 0, 00:30:03.941 "data_size": 0 00:30:03.941 } 00:30:03.941 ] 00:30:03.941 }' 00:30:03.941 11:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:03.941 11:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:04.525 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:04.784 BaseBdev3 00:30:04.784 [2024-05-15 11:23:23.333091] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:04.784 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:30:04.784 11:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:04.784 11:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:04.784 11:23:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:04.784 11:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:04.784 11:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:04.784 11:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:05.042 11:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:05.300 [ 00:30:05.300 { 00:30:05.300 "name": "BaseBdev3", 00:30:05.300 "aliases": [ 00:30:05.300 "4c85e6bf-5070-4764-acec-6a250e221c8e" 00:30:05.300 ], 00:30:05.300 "product_name": "Malloc disk", 00:30:05.300 "block_size": 512, 00:30:05.300 "num_blocks": 65536, 00:30:05.300 "uuid": "4c85e6bf-5070-4764-acec-6a250e221c8e", 00:30:05.300 "assigned_rate_limits": { 00:30:05.300 "rw_ios_per_sec": 0, 00:30:05.300 "rw_mbytes_per_sec": 0, 00:30:05.300 "r_mbytes_per_sec": 0, 00:30:05.300 "w_mbytes_per_sec": 0 00:30:05.300 }, 00:30:05.300 "claimed": true, 00:30:05.300 "claim_type": "exclusive_write", 00:30:05.300 "zoned": false, 00:30:05.300 "supported_io_types": { 00:30:05.300 "read": true, 00:30:05.300 "write": true, 00:30:05.300 "unmap": true, 00:30:05.300 "write_zeroes": true, 00:30:05.300 "flush": true, 00:30:05.300 "reset": true, 00:30:05.300 "compare": false, 00:30:05.300 "compare_and_write": false, 00:30:05.300 "abort": true, 00:30:05.300 "nvme_admin": false, 00:30:05.300 "nvme_io": false 00:30:05.300 }, 00:30:05.300 "memory_domains": [ 00:30:05.300 { 00:30:05.300 "dma_device_id": "system", 00:30:05.300 "dma_device_type": 1 00:30:05.300 }, 00:30:05.300 { 00:30:05.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:05.300 "dma_device_type": 2 00:30:05.300 } 00:30:05.300 ], 00:30:05.300 "driver_specific": {} 00:30:05.300 } 00:30:05.300 ] 00:30:05.300 11:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:05.300 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:30:05.300 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:30:05.300 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:05.301 11:23:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:05.301 "name": "Existed_Raid", 00:30:05.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.301 "strip_size_kb": 64, 00:30:05.301 "state": "configuring", 00:30:05.301 "raid_level": "raid0", 00:30:05.301 "superblock": false, 00:30:05.301 "num_base_bdevs": 4, 00:30:05.301 "num_base_bdevs_discovered": 3, 00:30:05.301 "num_base_bdevs_operational": 4, 00:30:05.301 "base_bdevs_list": [ 00:30:05.301 { 00:30:05.301 "name": "BaseBdev1", 00:30:05.301 "uuid": "57d27982-3f22-4928-9b67-758996034153", 00:30:05.301 "is_configured": true, 00:30:05.301 "data_offset": 0, 00:30:05.301 "data_size": 65536 00:30:05.301 }, 00:30:05.301 { 00:30:05.301 "name": "BaseBdev2", 00:30:05.301 "uuid": "9ee96117-c8ec-4038-887a-d54c0772032d", 00:30:05.301 "is_configured": true, 00:30:05.301 "data_offset": 0, 00:30:05.301 "data_size": 65536 00:30:05.301 }, 00:30:05.301 { 00:30:05.301 "name": "BaseBdev3", 00:30:05.301 "uuid": "4c85e6bf-5070-4764-acec-6a250e221c8e", 00:30:05.301 "is_configured": true, 00:30:05.301 "data_offset": 0, 00:30:05.301 "data_size": 65536 00:30:05.301 }, 00:30:05.301 { 00:30:05.301 "name": "BaseBdev4", 00:30:05.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.301 "is_configured": false, 00:30:05.301 "data_offset": 0, 00:30:05.301 "data_size": 0 00:30:05.301 } 00:30:05.301 ] 00:30:05.301 }' 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:05.301 11:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.236 11:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:30:06.236 [2024-05-15 11:23:24.845022] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:06.236 [2024-05-15 11:23:24.845071] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:30:06.236 [2024-05-15 11:23:24.845082] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:30:06.236 [2024-05-15 11:23:24.845220] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:30:06.236 [2024-05-15 11:23:24.845469] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:30:06.236 [2024-05-15 11:23:24.845485] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:30:06.236 [2024-05-15 11:23:24.845684] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:06.236 BaseBdev4 00:30:06.236 11:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:30:06.236 11:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:30:06.236 11:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:06.236 11:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:06.236 11:23:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:06.236 11:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:06.236 11:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:06.495 11:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:06.754 [ 00:30:06.754 { 00:30:06.754 "name": "BaseBdev4", 00:30:06.754 "aliases": [ 00:30:06.754 "154455fa-f679-4be0-a2b7-c7a0b36b84e3" 00:30:06.754 ], 00:30:06.754 "product_name": "Malloc disk", 00:30:06.754 "block_size": 512, 00:30:06.754 "num_blocks": 65536, 00:30:06.754 "uuid": "154455fa-f679-4be0-a2b7-c7a0b36b84e3", 00:30:06.754 "assigned_rate_limits": { 00:30:06.754 "rw_ios_per_sec": 0, 00:30:06.754 "rw_mbytes_per_sec": 0, 00:30:06.754 "r_mbytes_per_sec": 0, 00:30:06.754 "w_mbytes_per_sec": 0 00:30:06.754 }, 00:30:06.754 "claimed": true, 00:30:06.754 "claim_type": "exclusive_write", 00:30:06.754 "zoned": false, 00:30:06.754 "supported_io_types": { 00:30:06.754 "read": true, 00:30:06.754 "write": true, 00:30:06.754 "unmap": true, 00:30:06.754 "write_zeroes": true, 00:30:06.754 "flush": true, 00:30:06.754 "reset": true, 00:30:06.754 "compare": false, 00:30:06.754 "compare_and_write": false, 00:30:06.754 "abort": true, 00:30:06.754 "nvme_admin": false, 00:30:06.754 "nvme_io": false 00:30:06.754 }, 00:30:06.754 "memory_domains": [ 00:30:06.754 { 00:30:06.754 "dma_device_id": "system", 00:30:06.754 "dma_device_type": 1 00:30:06.754 }, 00:30:06.754 { 00:30:06.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:06.754 "dma_device_type": 2 00:30:06.754 } 00:30:06.754 ], 00:30:06.754 "driver_specific": {} 00:30:06.754 } 00:30:06.754 ] 00:30:06.754 11:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:06.754 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:06.755 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:06.755 11:23:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.013 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:07.013 "name": "Existed_Raid", 00:30:07.013 "uuid": "4c8b7fb9-6378-42fb-97ff-8019c7f62c87", 00:30:07.013 "strip_size_kb": 64, 00:30:07.013 "state": "online", 00:30:07.013 "raid_level": "raid0", 00:30:07.013 "superblock": false, 00:30:07.013 "num_base_bdevs": 4, 00:30:07.013 "num_base_bdevs_discovered": 4, 00:30:07.013 "num_base_bdevs_operational": 4, 00:30:07.013 "base_bdevs_list": [ 00:30:07.013 { 00:30:07.013 "name": "BaseBdev1", 00:30:07.013 "uuid": "57d27982-3f22-4928-9b67-758996034153", 00:30:07.013 "is_configured": true, 00:30:07.013 "data_offset": 0, 00:30:07.013 "data_size": 65536 00:30:07.013 }, 00:30:07.013 { 00:30:07.013 "name": "BaseBdev2", 00:30:07.013 "uuid": "9ee96117-c8ec-4038-887a-d54c0772032d", 00:30:07.013 "is_configured": true, 00:30:07.013 "data_offset": 0, 00:30:07.013 "data_size": 65536 00:30:07.013 }, 00:30:07.013 { 00:30:07.013 "name": "BaseBdev3", 00:30:07.013 "uuid": "4c85e6bf-5070-4764-acec-6a250e221c8e", 00:30:07.013 "is_configured": true, 00:30:07.013 "data_offset": 0, 00:30:07.013 "data_size": 65536 00:30:07.013 }, 00:30:07.013 { 00:30:07.013 "name": "BaseBdev4", 00:30:07.013 "uuid": "154455fa-f679-4be0-a2b7-c7a0b36b84e3", 00:30:07.013 "is_configured": true, 00:30:07.013 "data_offset": 0, 00:30:07.013 "data_size": 65536 00:30:07.013 } 00:30:07.013 ] 00:30:07.013 }' 00:30:07.013 11:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:07.013 11:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.609 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:30:07.609 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:30:07.609 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:30:07.609 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:30:07.609 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:30:07.609 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:30:07.609 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:07.609 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:30:07.868 [2024-05-15 11:23:26.393609] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:07.868 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:30:07.868 "name": "Existed_Raid", 00:30:07.868 "aliases": [ 00:30:07.868 "4c8b7fb9-6378-42fb-97ff-8019c7f62c87" 00:30:07.868 ], 00:30:07.868 "product_name": "Raid Volume", 00:30:07.868 "block_size": 512, 00:30:07.868 "num_blocks": 262144, 00:30:07.868 "uuid": "4c8b7fb9-6378-42fb-97ff-8019c7f62c87", 00:30:07.868 "assigned_rate_limits": { 00:30:07.868 "rw_ios_per_sec": 0, 00:30:07.868 "rw_mbytes_per_sec": 0, 00:30:07.868 "r_mbytes_per_sec": 0, 00:30:07.868 "w_mbytes_per_sec": 0 00:30:07.868 }, 00:30:07.868 "claimed": false, 00:30:07.868 "zoned": false, 00:30:07.868 
"supported_io_types": { 00:30:07.868 "read": true, 00:30:07.868 "write": true, 00:30:07.868 "unmap": true, 00:30:07.868 "write_zeroes": true, 00:30:07.868 "flush": true, 00:30:07.868 "reset": true, 00:30:07.868 "compare": false, 00:30:07.868 "compare_and_write": false, 00:30:07.868 "abort": false, 00:30:07.868 "nvme_admin": false, 00:30:07.868 "nvme_io": false 00:30:07.868 }, 00:30:07.868 "memory_domains": [ 00:30:07.868 { 00:30:07.868 "dma_device_id": "system", 00:30:07.868 "dma_device_type": 1 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.868 "dma_device_type": 2 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "dma_device_id": "system", 00:30:07.868 "dma_device_type": 1 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.868 "dma_device_type": 2 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "dma_device_id": "system", 00:30:07.868 "dma_device_type": 1 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.868 "dma_device_type": 2 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "dma_device_id": "system", 00:30:07.868 "dma_device_type": 1 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.868 "dma_device_type": 2 00:30:07.868 } 00:30:07.868 ], 00:30:07.868 "driver_specific": { 00:30:07.868 "raid": { 00:30:07.868 "uuid": "4c8b7fb9-6378-42fb-97ff-8019c7f62c87", 00:30:07.868 "strip_size_kb": 64, 00:30:07.868 "state": "online", 00:30:07.868 "raid_level": "raid0", 00:30:07.868 "superblock": false, 00:30:07.868 "num_base_bdevs": 4, 00:30:07.868 "num_base_bdevs_discovered": 4, 00:30:07.868 "num_base_bdevs_operational": 4, 00:30:07.868 "base_bdevs_list": [ 00:30:07.868 { 00:30:07.868 "name": "BaseBdev1", 00:30:07.868 "uuid": "57d27982-3f22-4928-9b67-758996034153", 00:30:07.868 "is_configured": true, 00:30:07.868 "data_offset": 0, 00:30:07.868 "data_size": 65536 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "name": "BaseBdev2", 00:30:07.868 "uuid": "9ee96117-c8ec-4038-887a-d54c0772032d", 00:30:07.868 "is_configured": true, 00:30:07.868 "data_offset": 0, 00:30:07.868 "data_size": 65536 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "name": "BaseBdev3", 00:30:07.868 "uuid": "4c85e6bf-5070-4764-acec-6a250e221c8e", 00:30:07.868 "is_configured": true, 00:30:07.868 "data_offset": 0, 00:30:07.868 "data_size": 65536 00:30:07.868 }, 00:30:07.868 { 00:30:07.868 "name": "BaseBdev4", 00:30:07.868 "uuid": "154455fa-f679-4be0-a2b7-c7a0b36b84e3", 00:30:07.868 "is_configured": true, 00:30:07.868 "data_offset": 0, 00:30:07.868 "data_size": 65536 00:30:07.868 } 00:30:07.868 ] 00:30:07.868 } 00:30:07.868 } 00:30:07.868 }' 00:30:07.868 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:07.868 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:30:07.868 BaseBdev2 00:30:07.868 BaseBdev3 00:30:07.868 BaseBdev4' 00:30:07.868 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:07.868 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:07.868 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:08.126 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
base_bdev_info='{ 00:30:08.127 "name": "BaseBdev1", 00:30:08.127 "aliases": [ 00:30:08.127 "57d27982-3f22-4928-9b67-758996034153" 00:30:08.127 ], 00:30:08.127 "product_name": "Malloc disk", 00:30:08.127 "block_size": 512, 00:30:08.127 "num_blocks": 65536, 00:30:08.127 "uuid": "57d27982-3f22-4928-9b67-758996034153", 00:30:08.127 "assigned_rate_limits": { 00:30:08.127 "rw_ios_per_sec": 0, 00:30:08.127 "rw_mbytes_per_sec": 0, 00:30:08.127 "r_mbytes_per_sec": 0, 00:30:08.127 "w_mbytes_per_sec": 0 00:30:08.127 }, 00:30:08.127 "claimed": true, 00:30:08.127 "claim_type": "exclusive_write", 00:30:08.127 "zoned": false, 00:30:08.127 "supported_io_types": { 00:30:08.127 "read": true, 00:30:08.127 "write": true, 00:30:08.127 "unmap": true, 00:30:08.127 "write_zeroes": true, 00:30:08.127 "flush": true, 00:30:08.127 "reset": true, 00:30:08.127 "compare": false, 00:30:08.127 "compare_and_write": false, 00:30:08.127 "abort": true, 00:30:08.127 "nvme_admin": false, 00:30:08.127 "nvme_io": false 00:30:08.127 }, 00:30:08.127 "memory_domains": [ 00:30:08.127 { 00:30:08.127 "dma_device_id": "system", 00:30:08.127 "dma_device_type": 1 00:30:08.127 }, 00:30:08.127 { 00:30:08.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.127 "dma_device_type": 2 00:30:08.127 } 00:30:08.127 ], 00:30:08.127 "driver_specific": {} 00:30:08.127 }' 00:30:08.127 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:08.127 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:08.386 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:08.386 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:08.386 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:08.386 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:08.386 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:08.386 11:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:08.386 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:08.386 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:08.645 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:08.645 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:08.645 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:08.645 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:08.645 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:08.903 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:08.903 "name": "BaseBdev2", 00:30:08.903 "aliases": [ 00:30:08.903 "9ee96117-c8ec-4038-887a-d54c0772032d" 00:30:08.903 ], 00:30:08.903 "product_name": "Malloc disk", 00:30:08.903 "block_size": 512, 00:30:08.903 "num_blocks": 65536, 00:30:08.903 "uuid": "9ee96117-c8ec-4038-887a-d54c0772032d", 00:30:08.903 "assigned_rate_limits": { 00:30:08.903 "rw_ios_per_sec": 0, 00:30:08.903 "rw_mbytes_per_sec": 0, 00:30:08.903 "r_mbytes_per_sec": 0, 00:30:08.903 "w_mbytes_per_sec": 0 
00:30:08.903 }, 00:30:08.903 "claimed": true, 00:30:08.903 "claim_type": "exclusive_write", 00:30:08.903 "zoned": false, 00:30:08.903 "supported_io_types": { 00:30:08.903 "read": true, 00:30:08.903 "write": true, 00:30:08.903 "unmap": true, 00:30:08.903 "write_zeroes": true, 00:30:08.903 "flush": true, 00:30:08.903 "reset": true, 00:30:08.903 "compare": false, 00:30:08.903 "compare_and_write": false, 00:30:08.903 "abort": true, 00:30:08.903 "nvme_admin": false, 00:30:08.903 "nvme_io": false 00:30:08.903 }, 00:30:08.903 "memory_domains": [ 00:30:08.903 { 00:30:08.903 "dma_device_id": "system", 00:30:08.903 "dma_device_type": 1 00:30:08.903 }, 00:30:08.903 { 00:30:08.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.903 "dma_device_type": 2 00:30:08.903 } 00:30:08.903 ], 00:30:08.903 "driver_specific": {} 00:30:08.903 }' 00:30:08.903 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:08.903 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:08.903 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:08.903 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:08.903 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:09.162 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:09.422 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:09.422 "name": "BaseBdev3", 00:30:09.422 "aliases": [ 00:30:09.422 "4c85e6bf-5070-4764-acec-6a250e221c8e" 00:30:09.422 ], 00:30:09.422 "product_name": "Malloc disk", 00:30:09.422 "block_size": 512, 00:30:09.422 "num_blocks": 65536, 00:30:09.422 "uuid": "4c85e6bf-5070-4764-acec-6a250e221c8e", 00:30:09.422 "assigned_rate_limits": { 00:30:09.422 "rw_ios_per_sec": 0, 00:30:09.422 "rw_mbytes_per_sec": 0, 00:30:09.422 "r_mbytes_per_sec": 0, 00:30:09.422 "w_mbytes_per_sec": 0 00:30:09.422 }, 00:30:09.422 "claimed": true, 00:30:09.422 "claim_type": "exclusive_write", 00:30:09.422 "zoned": false, 00:30:09.422 "supported_io_types": { 00:30:09.422 "read": true, 00:30:09.422 "write": true, 00:30:09.422 "unmap": true, 00:30:09.422 "write_zeroes": true, 00:30:09.422 "flush": true, 00:30:09.422 "reset": true, 00:30:09.422 "compare": false, 00:30:09.422 "compare_and_write": false, 00:30:09.422 "abort": true, 00:30:09.422 "nvme_admin": false, 00:30:09.422 "nvme_io": false 
00:30:09.422 }, 00:30:09.422 "memory_domains": [ 00:30:09.422 { 00:30:09.422 "dma_device_id": "system", 00:30:09.422 "dma_device_type": 1 00:30:09.422 }, 00:30:09.422 { 00:30:09.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.422 "dma_device_type": 2 00:30:09.422 } 00:30:09.422 ], 00:30:09.422 "driver_specific": {} 00:30:09.422 }' 00:30:09.422 11:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:09.422 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:09.680 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:09.680 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:09.680 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:09.680 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:09.680 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:09.680 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:09.939 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:09.939 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:09.939 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:09.939 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:09.939 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:09.939 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:09.939 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:30:10.198 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:10.198 "name": "BaseBdev4", 00:30:10.198 "aliases": [ 00:30:10.198 "154455fa-f679-4be0-a2b7-c7a0b36b84e3" 00:30:10.198 ], 00:30:10.198 "product_name": "Malloc disk", 00:30:10.198 "block_size": 512, 00:30:10.198 "num_blocks": 65536, 00:30:10.198 "uuid": "154455fa-f679-4be0-a2b7-c7a0b36b84e3", 00:30:10.198 "assigned_rate_limits": { 00:30:10.198 "rw_ios_per_sec": 0, 00:30:10.198 "rw_mbytes_per_sec": 0, 00:30:10.198 "r_mbytes_per_sec": 0, 00:30:10.198 "w_mbytes_per_sec": 0 00:30:10.198 }, 00:30:10.198 "claimed": true, 00:30:10.198 "claim_type": "exclusive_write", 00:30:10.198 "zoned": false, 00:30:10.198 "supported_io_types": { 00:30:10.198 "read": true, 00:30:10.198 "write": true, 00:30:10.198 "unmap": true, 00:30:10.198 "write_zeroes": true, 00:30:10.198 "flush": true, 00:30:10.198 "reset": true, 00:30:10.198 "compare": false, 00:30:10.198 "compare_and_write": false, 00:30:10.198 "abort": true, 00:30:10.198 "nvme_admin": false, 00:30:10.198 "nvme_io": false 00:30:10.198 }, 00:30:10.198 "memory_domains": [ 00:30:10.198 { 00:30:10.198 "dma_device_id": "system", 00:30:10.198 "dma_device_type": 1 00:30:10.198 }, 00:30:10.198 { 00:30:10.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:10.198 "dma_device_type": 2 00:30:10.198 } 00:30:10.198 ], 00:30:10.198 "driver_specific": {} 00:30:10.198 }' 00:30:10.198 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:10.198 11:23:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:10.198 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:10.198 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:10.198 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:10.473 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:10.473 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:10.473 11:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:10.474 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:10.474 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:10.474 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:10.474 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:10.474 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:10.732 [2024-05-15 11:23:29.286109] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:10.732 [2024-05-15 11:23:29.286153] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:10.732 [2024-05-15 11:23:29.286203] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.990 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:30:11.247 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:11.247 "name": "Existed_Raid", 00:30:11.247 "uuid": "4c8b7fb9-6378-42fb-97ff-8019c7f62c87", 00:30:11.247 "strip_size_kb": 64, 00:30:11.247 "state": "offline", 00:30:11.247 "raid_level": "raid0", 00:30:11.247 "superblock": false, 00:30:11.247 "num_base_bdevs": 4, 00:30:11.247 "num_base_bdevs_discovered": 3, 00:30:11.247 "num_base_bdevs_operational": 3, 00:30:11.247 "base_bdevs_list": [ 00:30:11.247 { 00:30:11.247 "name": null, 00:30:11.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:11.247 "is_configured": false, 00:30:11.247 "data_offset": 0, 00:30:11.247 "data_size": 65536 00:30:11.247 }, 00:30:11.247 { 00:30:11.247 "name": "BaseBdev2", 00:30:11.247 "uuid": "9ee96117-c8ec-4038-887a-d54c0772032d", 00:30:11.247 "is_configured": true, 00:30:11.247 "data_offset": 0, 00:30:11.247 "data_size": 65536 00:30:11.247 }, 00:30:11.247 { 00:30:11.247 "name": "BaseBdev3", 00:30:11.247 "uuid": "4c85e6bf-5070-4764-acec-6a250e221c8e", 00:30:11.247 "is_configured": true, 00:30:11.247 "data_offset": 0, 00:30:11.247 "data_size": 65536 00:30:11.247 }, 00:30:11.247 { 00:30:11.247 "name": "BaseBdev4", 00:30:11.247 "uuid": "154455fa-f679-4be0-a2b7-c7a0b36b84e3", 00:30:11.247 "is_configured": true, 00:30:11.247 "data_offset": 0, 00:30:11.248 "data_size": 65536 00:30:11.248 } 00:30:11.248 ] 00:30:11.248 }' 00:30:11.248 11:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:11.248 11:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.812 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:11.812 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:11.812 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:30:11.812 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.069 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:30:12.069 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:12.069 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:12.327 [2024-05-15 11:23:30.794430] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:12.327 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:12.327 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:12.327 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.327 11:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:30:12.585 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:30:12.585 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:12.585 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:12.843 [2024-05-15 11:23:31.371604] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:12.843 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:12.843 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:12.843 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:30:12.843 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.102 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:30:13.102 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:13.102 11:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:30:13.360 [2024-05-15 11:23:31.900541] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:13.360 [2024-05-15 11:23:31.900602] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:30:13.619 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:13.877 BaseBdev2 00:30:13.877 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:30:13.877 11:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:13.877 11:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:13.877 11:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:13.877 11:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:13.877 11:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:13.877 11:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:14.182 11:23:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:14.440 [ 00:30:14.440 { 00:30:14.440 "name": "BaseBdev2", 00:30:14.440 "aliases": [ 00:30:14.440 "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599" 00:30:14.440 ], 00:30:14.440 "product_name": "Malloc disk", 00:30:14.440 "block_size": 512, 00:30:14.440 "num_blocks": 65536, 00:30:14.440 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:14.440 "assigned_rate_limits": { 00:30:14.440 "rw_ios_per_sec": 0, 00:30:14.440 "rw_mbytes_per_sec": 0, 00:30:14.440 "r_mbytes_per_sec": 0, 00:30:14.440 "w_mbytes_per_sec": 0 00:30:14.440 }, 00:30:14.440 "claimed": false, 00:30:14.440 "zoned": false, 00:30:14.440 "supported_io_types": { 00:30:14.440 "read": true, 00:30:14.440 "write": true, 00:30:14.440 "unmap": true, 00:30:14.440 "write_zeroes": true, 00:30:14.440 "flush": true, 00:30:14.440 "reset": true, 00:30:14.440 "compare": false, 00:30:14.440 "compare_and_write": false, 00:30:14.440 "abort": true, 00:30:14.440 "nvme_admin": false, 00:30:14.440 "nvme_io": false 00:30:14.440 }, 00:30:14.440 "memory_domains": [ 00:30:14.440 { 00:30:14.440 "dma_device_id": "system", 00:30:14.440 "dma_device_type": 1 00:30:14.440 }, 00:30:14.440 { 00:30:14.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.440 "dma_device_type": 2 00:30:14.440 } 00:30:14.441 ], 00:30:14.441 "driver_specific": {} 00:30:14.441 } 00:30:14.441 ] 00:30:14.441 11:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:14.441 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:30:14.441 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:30:14.441 11:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:14.699 BaseBdev3 00:30:14.699 11:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:30:14.699 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:14.699 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:14.699 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:14.699 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:14.699 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:14.699 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:14.957 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:14.957 [ 00:30:14.957 { 00:30:14.957 "name": "BaseBdev3", 00:30:14.957 "aliases": [ 00:30:14.957 "82496407-f528-4ab5-9f51-a0aec9084823" 00:30:14.957 ], 00:30:14.957 "product_name": "Malloc disk", 00:30:14.957 "block_size": 512, 00:30:14.957 "num_blocks": 65536, 00:30:14.957 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:14.957 "assigned_rate_limits": { 00:30:14.957 "rw_ios_per_sec": 0, 00:30:14.957 "rw_mbytes_per_sec": 0, 00:30:14.957 "r_mbytes_per_sec": 0, 00:30:14.957 
"w_mbytes_per_sec": 0 00:30:14.957 }, 00:30:14.957 "claimed": false, 00:30:14.957 "zoned": false, 00:30:14.957 "supported_io_types": { 00:30:14.957 "read": true, 00:30:14.957 "write": true, 00:30:14.957 "unmap": true, 00:30:14.957 "write_zeroes": true, 00:30:14.957 "flush": true, 00:30:14.957 "reset": true, 00:30:14.957 "compare": false, 00:30:14.957 "compare_and_write": false, 00:30:14.957 "abort": true, 00:30:14.957 "nvme_admin": false, 00:30:14.957 "nvme_io": false 00:30:14.957 }, 00:30:14.957 "memory_domains": [ 00:30:14.957 { 00:30:14.957 "dma_device_id": "system", 00:30:14.957 "dma_device_type": 1 00:30:14.957 }, 00:30:14.957 { 00:30:14.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.957 "dma_device_type": 2 00:30:14.957 } 00:30:14.957 ], 00:30:14.957 "driver_specific": {} 00:30:14.957 } 00:30:14.957 ] 00:30:14.957 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:14.957 11:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:30:14.957 11:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:30:14.957 11:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:30:15.216 BaseBdev4 00:30:15.216 11:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:30:15.216 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:30:15.216 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:15.216 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:15.216 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:15.216 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:15.216 11:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:15.474 11:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:15.733 [ 00:30:15.733 { 00:30:15.733 "name": "BaseBdev4", 00:30:15.733 "aliases": [ 00:30:15.733 "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d" 00:30:15.733 ], 00:30:15.733 "product_name": "Malloc disk", 00:30:15.733 "block_size": 512, 00:30:15.733 "num_blocks": 65536, 00:30:15.733 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:15.733 "assigned_rate_limits": { 00:30:15.733 "rw_ios_per_sec": 0, 00:30:15.733 "rw_mbytes_per_sec": 0, 00:30:15.733 "r_mbytes_per_sec": 0, 00:30:15.733 "w_mbytes_per_sec": 0 00:30:15.733 }, 00:30:15.733 "claimed": false, 00:30:15.733 "zoned": false, 00:30:15.733 "supported_io_types": { 00:30:15.733 "read": true, 00:30:15.733 "write": true, 00:30:15.733 "unmap": true, 00:30:15.733 "write_zeroes": true, 00:30:15.733 "flush": true, 00:30:15.733 "reset": true, 00:30:15.733 "compare": false, 00:30:15.733 "compare_and_write": false, 00:30:15.733 "abort": true, 00:30:15.733 "nvme_admin": false, 00:30:15.733 "nvme_io": false 00:30:15.733 }, 00:30:15.733 "memory_domains": [ 00:30:15.733 { 00:30:15.733 "dma_device_id": "system", 00:30:15.733 "dma_device_type": 1 00:30:15.733 }, 00:30:15.733 { 
00:30:15.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:15.733 "dma_device_type": 2 00:30:15.733 } 00:30:15.733 ], 00:30:15.733 "driver_specific": {} 00:30:15.733 } 00:30:15.733 ] 00:30:15.733 11:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:15.733 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:30:15.733 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:30:15.733 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:15.991 [2024-05-15 11:23:34.506698] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:15.991 [2024-05-15 11:23:34.506799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:15.991 [2024-05-15 11:23:34.507120] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:15.991 [2024-05-15 11:23:34.509025] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:15.991 [2024-05-15 11:23:34.509078] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.991 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:16.249 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:16.249 "name": "Existed_Raid", 00:30:16.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:16.249 "strip_size_kb": 64, 00:30:16.249 "state": "configuring", 00:30:16.249 "raid_level": "raid0", 00:30:16.249 "superblock": false, 00:30:16.249 "num_base_bdevs": 4, 00:30:16.249 "num_base_bdevs_discovered": 3, 00:30:16.249 "num_base_bdevs_operational": 4, 00:30:16.249 "base_bdevs_list": [ 00:30:16.249 { 00:30:16.249 "name": "BaseBdev1", 00:30:16.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:16.249 "is_configured": false, 00:30:16.249 "data_offset": 0, 00:30:16.249 
"data_size": 0 00:30:16.249 }, 00:30:16.250 { 00:30:16.250 "name": "BaseBdev2", 00:30:16.250 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:16.250 "is_configured": true, 00:30:16.250 "data_offset": 0, 00:30:16.250 "data_size": 65536 00:30:16.250 }, 00:30:16.250 { 00:30:16.250 "name": "BaseBdev3", 00:30:16.250 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:16.250 "is_configured": true, 00:30:16.250 "data_offset": 0, 00:30:16.250 "data_size": 65536 00:30:16.250 }, 00:30:16.250 { 00:30:16.250 "name": "BaseBdev4", 00:30:16.250 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:16.250 "is_configured": true, 00:30:16.250 "data_offset": 0, 00:30:16.250 "data_size": 65536 00:30:16.250 } 00:30:16.250 ] 00:30:16.250 }' 00:30:16.250 11:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:16.250 11:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.184 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:17.184 [2024-05-15 11:23:35.738788] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:17.184 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:17.184 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:17.184 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:17.184 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:17.184 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:17.185 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:17.185 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:17.185 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:17.185 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:17.185 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:17.185 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.185 11:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:17.443 11:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:17.443 "name": "Existed_Raid", 00:30:17.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.443 "strip_size_kb": 64, 00:30:17.443 "state": "configuring", 00:30:17.443 "raid_level": "raid0", 00:30:17.443 "superblock": false, 00:30:17.443 "num_base_bdevs": 4, 00:30:17.443 "num_base_bdevs_discovered": 2, 00:30:17.443 "num_base_bdevs_operational": 4, 00:30:17.443 "base_bdevs_list": [ 00:30:17.443 { 00:30:17.443 "name": "BaseBdev1", 00:30:17.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.443 "is_configured": false, 00:30:17.443 "data_offset": 0, 00:30:17.443 "data_size": 0 00:30:17.443 }, 00:30:17.443 { 00:30:17.443 "name": null, 00:30:17.443 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:17.443 
"is_configured": false, 00:30:17.443 "data_offset": 0, 00:30:17.443 "data_size": 65536 00:30:17.443 }, 00:30:17.443 { 00:30:17.443 "name": "BaseBdev3", 00:30:17.443 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:17.443 "is_configured": true, 00:30:17.443 "data_offset": 0, 00:30:17.443 "data_size": 65536 00:30:17.443 }, 00:30:17.443 { 00:30:17.443 "name": "BaseBdev4", 00:30:17.443 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:17.444 "is_configured": true, 00:30:17.444 "data_offset": 0, 00:30:17.444 "data_size": 65536 00:30:17.444 } 00:30:17.444 ] 00:30:17.444 }' 00:30:17.444 11:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:17.444 11:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.380 11:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.380 11:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:18.380 11:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:30:18.380 11:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:18.638 BaseBdev1 00:30:18.638 [2024-05-15 11:23:37.176014] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:18.638 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:30:18.638 11:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:18.638 11:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:18.638 11:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:18.638 11:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:18.638 11:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:18.638 11:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:18.897 11:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:19.156 [ 00:30:19.156 { 00:30:19.156 "name": "BaseBdev1", 00:30:19.156 "aliases": [ 00:30:19.156 "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d" 00:30:19.156 ], 00:30:19.156 "product_name": "Malloc disk", 00:30:19.156 "block_size": 512, 00:30:19.156 "num_blocks": 65536, 00:30:19.156 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:19.156 "assigned_rate_limits": { 00:30:19.156 "rw_ios_per_sec": 0, 00:30:19.156 "rw_mbytes_per_sec": 0, 00:30:19.156 "r_mbytes_per_sec": 0, 00:30:19.156 "w_mbytes_per_sec": 0 00:30:19.156 }, 00:30:19.156 "claimed": true, 00:30:19.156 "claim_type": "exclusive_write", 00:30:19.156 "zoned": false, 00:30:19.156 "supported_io_types": { 00:30:19.156 "read": true, 00:30:19.156 "write": true, 00:30:19.156 "unmap": true, 00:30:19.156 "write_zeroes": true, 00:30:19.156 "flush": true, 00:30:19.156 "reset": true, 00:30:19.156 "compare": false, 00:30:19.156 "compare_and_write": false, 00:30:19.156 "abort": 
true, 00:30:19.156 "nvme_admin": false, 00:30:19.156 "nvme_io": false 00:30:19.156 }, 00:30:19.156 "memory_domains": [ 00:30:19.156 { 00:30:19.156 "dma_device_id": "system", 00:30:19.156 "dma_device_type": 1 00:30:19.156 }, 00:30:19.156 { 00:30:19.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:19.156 "dma_device_type": 2 00:30:19.156 } 00:30:19.156 ], 00:30:19.156 "driver_specific": {} 00:30:19.156 } 00:30:19.156 ] 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:19.156 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.415 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:19.415 "name": "Existed_Raid", 00:30:19.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.415 "strip_size_kb": 64, 00:30:19.415 "state": "configuring", 00:30:19.415 "raid_level": "raid0", 00:30:19.415 "superblock": false, 00:30:19.415 "num_base_bdevs": 4, 00:30:19.415 "num_base_bdevs_discovered": 3, 00:30:19.415 "num_base_bdevs_operational": 4, 00:30:19.415 "base_bdevs_list": [ 00:30:19.415 { 00:30:19.415 "name": "BaseBdev1", 00:30:19.415 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:19.415 "is_configured": true, 00:30:19.415 "data_offset": 0, 00:30:19.415 "data_size": 65536 00:30:19.415 }, 00:30:19.415 { 00:30:19.415 "name": null, 00:30:19.415 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:19.415 "is_configured": false, 00:30:19.415 "data_offset": 0, 00:30:19.415 "data_size": 65536 00:30:19.415 }, 00:30:19.415 { 00:30:19.415 "name": "BaseBdev3", 00:30:19.415 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:19.415 "is_configured": true, 00:30:19.415 "data_offset": 0, 00:30:19.415 "data_size": 65536 00:30:19.415 }, 00:30:19.415 { 00:30:19.415 "name": "BaseBdev4", 00:30:19.415 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:19.415 "is_configured": true, 00:30:19.415 "data_offset": 0, 00:30:19.415 "data_size": 65536 00:30:19.415 } 00:30:19.415 ] 00:30:19.415 }' 00:30:19.415 11:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:19.415 11:23:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.982 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.982 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:20.240 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:20.240 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:20.499 [2024-05-15 11:23:38.948517] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:20.499 11:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.758 11:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:20.758 "name": "Existed_Raid", 00:30:20.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.759 "strip_size_kb": 64, 00:30:20.759 "state": "configuring", 00:30:20.759 "raid_level": "raid0", 00:30:20.759 "superblock": false, 00:30:20.759 "num_base_bdevs": 4, 00:30:20.759 "num_base_bdevs_discovered": 2, 00:30:20.759 "num_base_bdevs_operational": 4, 00:30:20.759 "base_bdevs_list": [ 00:30:20.759 { 00:30:20.759 "name": "BaseBdev1", 00:30:20.759 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:20.759 "is_configured": true, 00:30:20.759 "data_offset": 0, 00:30:20.759 "data_size": 65536 00:30:20.759 }, 00:30:20.759 { 00:30:20.759 "name": null, 00:30:20.759 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:20.759 "is_configured": false, 00:30:20.759 "data_offset": 0, 00:30:20.759 "data_size": 65536 00:30:20.759 }, 00:30:20.759 { 00:30:20.759 "name": null, 00:30:20.759 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:20.759 "is_configured": false, 00:30:20.759 "data_offset": 0, 00:30:20.759 "data_size": 65536 00:30:20.759 }, 00:30:20.759 { 00:30:20.759 "name": "BaseBdev4", 00:30:20.759 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 
00:30:20.759 "is_configured": true, 00:30:20.759 "data_offset": 0, 00:30:20.759 "data_size": 65536 00:30:20.759 } 00:30:20.759 ] 00:30:20.759 }' 00:30:20.759 11:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:20.759 11:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.323 11:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.323 11:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:21.581 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:30:21.581 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:21.839 [2024-05-15 11:23:40.348876] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:21.839 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.098 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:22.098 "name": "Existed_Raid", 00:30:22.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.098 "strip_size_kb": 64, 00:30:22.098 "state": "configuring", 00:30:22.098 "raid_level": "raid0", 00:30:22.098 "superblock": false, 00:30:22.098 "num_base_bdevs": 4, 00:30:22.098 "num_base_bdevs_discovered": 3, 00:30:22.098 "num_base_bdevs_operational": 4, 00:30:22.098 "base_bdevs_list": [ 00:30:22.098 { 00:30:22.098 "name": "BaseBdev1", 00:30:22.098 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:22.098 "is_configured": true, 00:30:22.098 "data_offset": 0, 00:30:22.098 "data_size": 65536 00:30:22.098 }, 00:30:22.098 { 00:30:22.098 "name": null, 00:30:22.098 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:22.098 "is_configured": false, 00:30:22.098 "data_offset": 0, 00:30:22.098 "data_size": 65536 00:30:22.098 }, 00:30:22.098 { 00:30:22.098 
"name": "BaseBdev3", 00:30:22.098 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:22.098 "is_configured": true, 00:30:22.098 "data_offset": 0, 00:30:22.098 "data_size": 65536 00:30:22.098 }, 00:30:22.098 { 00:30:22.098 "name": "BaseBdev4", 00:30:22.098 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:22.098 "is_configured": true, 00:30:22.098 "data_offset": 0, 00:30:22.098 "data_size": 65536 00:30:22.098 } 00:30:22.098 ] 00:30:22.098 }' 00:30:22.098 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:22.098 11:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.665 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.665 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:22.923 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:30:22.923 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:23.182 [2024-05-15 11:23:41.697124] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.182 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:23.440 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:23.440 "name": "Existed_Raid", 00:30:23.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.440 "strip_size_kb": 64, 00:30:23.440 "state": "configuring", 00:30:23.440 "raid_level": "raid0", 00:30:23.440 "superblock": false, 00:30:23.440 "num_base_bdevs": 4, 00:30:23.440 "num_base_bdevs_discovered": 2, 00:30:23.440 "num_base_bdevs_operational": 4, 00:30:23.440 "base_bdevs_list": [ 00:30:23.440 { 00:30:23.440 "name": null, 00:30:23.440 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:23.440 "is_configured": false, 00:30:23.440 "data_offset": 0, 00:30:23.440 "data_size": 65536 
00:30:23.440 }, 00:30:23.440 { 00:30:23.440 "name": null, 00:30:23.440 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:23.440 "is_configured": false, 00:30:23.440 "data_offset": 0, 00:30:23.440 "data_size": 65536 00:30:23.440 }, 00:30:23.440 { 00:30:23.440 "name": "BaseBdev3", 00:30:23.440 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:23.440 "is_configured": true, 00:30:23.440 "data_offset": 0, 00:30:23.440 "data_size": 65536 00:30:23.440 }, 00:30:23.440 { 00:30:23.440 "name": "BaseBdev4", 00:30:23.440 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:23.440 "is_configured": true, 00:30:23.440 "data_offset": 0, 00:30:23.440 "data_size": 65536 00:30:23.440 } 00:30:23.440 ] 00:30:23.440 }' 00:30:23.440 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:23.440 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.375 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.375 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:24.375 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:30:24.375 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:24.633 [2024-05-15 11:23:43.018794] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:24.633 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.892 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:24.892 "name": "Existed_Raid", 00:30:24.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.892 "strip_size_kb": 64, 00:30:24.892 "state": "configuring", 00:30:24.892 "raid_level": "raid0", 00:30:24.892 "superblock": false, 00:30:24.892 "num_base_bdevs": 4, 00:30:24.892 
"num_base_bdevs_discovered": 3, 00:30:24.892 "num_base_bdevs_operational": 4, 00:30:24.892 "base_bdevs_list": [ 00:30:24.892 { 00:30:24.892 "name": null, 00:30:24.892 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:24.892 "is_configured": false, 00:30:24.892 "data_offset": 0, 00:30:24.892 "data_size": 65536 00:30:24.892 }, 00:30:24.892 { 00:30:24.892 "name": "BaseBdev2", 00:30:24.892 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:24.892 "is_configured": true, 00:30:24.892 "data_offset": 0, 00:30:24.892 "data_size": 65536 00:30:24.892 }, 00:30:24.892 { 00:30:24.892 "name": "BaseBdev3", 00:30:24.892 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:24.892 "is_configured": true, 00:30:24.892 "data_offset": 0, 00:30:24.892 "data_size": 65536 00:30:24.892 }, 00:30:24.892 { 00:30:24.892 "name": "BaseBdev4", 00:30:24.892 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:24.892 "is_configured": true, 00:30:24.892 "data_offset": 0, 00:30:24.892 "data_size": 65536 00:30:24.892 } 00:30:24.892 ] 00:30:24.892 }' 00:30:24.892 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:24.892 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.516 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.516 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:25.516 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:30:25.516 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.516 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:25.773 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d 00:30:26.032 [2024-05-15 11:23:44.523324] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:26.032 [2024-05-15 11:23:44.523372] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:30:26.032 [2024-05-15 11:23:44.523382] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:30:26.032 [2024-05-15 11:23:44.523556] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:26.032 [2024-05-15 11:23:44.523783] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:30:26.032 [2024-05-15 11:23:44.523799] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:30:26.032 NewBaseBdev 00:30:26.032 [2024-05-15 11:23:44.524209] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.032 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:30:26.032 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:30:26.032 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:26.032 11:23:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@897 -- # local i 00:30:26.032 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:26.032 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:26.032 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:26.291 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:26.550 [ 00:30:26.550 { 00:30:26.550 "name": "NewBaseBdev", 00:30:26.550 "aliases": [ 00:30:26.550 "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d" 00:30:26.550 ], 00:30:26.550 "product_name": "Malloc disk", 00:30:26.550 "block_size": 512, 00:30:26.550 "num_blocks": 65536, 00:30:26.550 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:26.550 "assigned_rate_limits": { 00:30:26.550 "rw_ios_per_sec": 0, 00:30:26.550 "rw_mbytes_per_sec": 0, 00:30:26.550 "r_mbytes_per_sec": 0, 00:30:26.550 "w_mbytes_per_sec": 0 00:30:26.550 }, 00:30:26.550 "claimed": true, 00:30:26.550 "claim_type": "exclusive_write", 00:30:26.550 "zoned": false, 00:30:26.550 "supported_io_types": { 00:30:26.550 "read": true, 00:30:26.550 "write": true, 00:30:26.550 "unmap": true, 00:30:26.550 "write_zeroes": true, 00:30:26.550 "flush": true, 00:30:26.550 "reset": true, 00:30:26.550 "compare": false, 00:30:26.550 "compare_and_write": false, 00:30:26.550 "abort": true, 00:30:26.550 "nvme_admin": false, 00:30:26.550 "nvme_io": false 00:30:26.550 }, 00:30:26.550 "memory_domains": [ 00:30:26.550 { 00:30:26.550 "dma_device_id": "system", 00:30:26.550 "dma_device_type": 1 00:30:26.550 }, 00:30:26.550 { 00:30:26.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.550 "dma_device_type": 2 00:30:26.550 } 00:30:26.550 ], 00:30:26.550 "driver_specific": {} 00:30:26.550 } 00:30:26.550 ] 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.550 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:30:26.809 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:26.809 "name": "Existed_Raid", 00:30:26.809 "uuid": "f20f0cc1-42e1-49c0-8d0c-233f9687e7a3", 00:30:26.809 "strip_size_kb": 64, 00:30:26.809 "state": "online", 00:30:26.809 "raid_level": "raid0", 00:30:26.809 "superblock": false, 00:30:26.809 "num_base_bdevs": 4, 00:30:26.809 "num_base_bdevs_discovered": 4, 00:30:26.809 "num_base_bdevs_operational": 4, 00:30:26.809 "base_bdevs_list": [ 00:30:26.809 { 00:30:26.809 "name": "NewBaseBdev", 00:30:26.809 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:26.809 "is_configured": true, 00:30:26.809 "data_offset": 0, 00:30:26.809 "data_size": 65536 00:30:26.809 }, 00:30:26.809 { 00:30:26.809 "name": "BaseBdev2", 00:30:26.809 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:26.809 "is_configured": true, 00:30:26.809 "data_offset": 0, 00:30:26.809 "data_size": 65536 00:30:26.809 }, 00:30:26.809 { 00:30:26.809 "name": "BaseBdev3", 00:30:26.809 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:26.809 "is_configured": true, 00:30:26.809 "data_offset": 0, 00:30:26.809 "data_size": 65536 00:30:26.809 }, 00:30:26.809 { 00:30:26.809 "name": "BaseBdev4", 00:30:26.809 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:26.809 "is_configured": true, 00:30:26.809 "data_offset": 0, 00:30:26.809 "data_size": 65536 00:30:26.809 } 00:30:26.809 ] 00:30:26.809 }' 00:30:26.809 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:26.809 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.376 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:30:27.376 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:30:27.376 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:30:27.376 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:30:27.376 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:30:27.376 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:30:27.376 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:30:27.376 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:27.634 [2024-05-15 11:23:46.039802] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:27.634 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:30:27.634 "name": "Existed_Raid", 00:30:27.634 "aliases": [ 00:30:27.634 "f20f0cc1-42e1-49c0-8d0c-233f9687e7a3" 00:30:27.634 ], 00:30:27.634 "product_name": "Raid Volume", 00:30:27.634 "block_size": 512, 00:30:27.634 "num_blocks": 262144, 00:30:27.634 "uuid": "f20f0cc1-42e1-49c0-8d0c-233f9687e7a3", 00:30:27.634 "assigned_rate_limits": { 00:30:27.634 "rw_ios_per_sec": 0, 00:30:27.634 "rw_mbytes_per_sec": 0, 00:30:27.634 "r_mbytes_per_sec": 0, 00:30:27.634 "w_mbytes_per_sec": 0 00:30:27.634 }, 00:30:27.634 "claimed": false, 00:30:27.634 "zoned": false, 00:30:27.634 "supported_io_types": { 00:30:27.634 "read": true, 00:30:27.634 "write": true, 00:30:27.634 "unmap": true, 00:30:27.634 
"write_zeroes": true, 00:30:27.634 "flush": true, 00:30:27.634 "reset": true, 00:30:27.634 "compare": false, 00:30:27.634 "compare_and_write": false, 00:30:27.634 "abort": false, 00:30:27.634 "nvme_admin": false, 00:30:27.634 "nvme_io": false 00:30:27.634 }, 00:30:27.634 "memory_domains": [ 00:30:27.634 { 00:30:27.634 "dma_device_id": "system", 00:30:27.634 "dma_device_type": 1 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.634 "dma_device_type": 2 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "dma_device_id": "system", 00:30:27.634 "dma_device_type": 1 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.634 "dma_device_type": 2 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "dma_device_id": "system", 00:30:27.634 "dma_device_type": 1 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.634 "dma_device_type": 2 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "dma_device_id": "system", 00:30:27.634 "dma_device_type": 1 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.634 "dma_device_type": 2 00:30:27.634 } 00:30:27.634 ], 00:30:27.634 "driver_specific": { 00:30:27.634 "raid": { 00:30:27.634 "uuid": "f20f0cc1-42e1-49c0-8d0c-233f9687e7a3", 00:30:27.634 "strip_size_kb": 64, 00:30:27.634 "state": "online", 00:30:27.634 "raid_level": "raid0", 00:30:27.634 "superblock": false, 00:30:27.634 "num_base_bdevs": 4, 00:30:27.634 "num_base_bdevs_discovered": 4, 00:30:27.634 "num_base_bdevs_operational": 4, 00:30:27.634 "base_bdevs_list": [ 00:30:27.634 { 00:30:27.634 "name": "NewBaseBdev", 00:30:27.634 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:27.634 "is_configured": true, 00:30:27.634 "data_offset": 0, 00:30:27.634 "data_size": 65536 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "name": "BaseBdev2", 00:30:27.634 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:27.634 "is_configured": true, 00:30:27.634 "data_offset": 0, 00:30:27.634 "data_size": 65536 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "name": "BaseBdev3", 00:30:27.634 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:27.634 "is_configured": true, 00:30:27.634 "data_offset": 0, 00:30:27.634 "data_size": 65536 00:30:27.634 }, 00:30:27.634 { 00:30:27.634 "name": "BaseBdev4", 00:30:27.634 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:27.634 "is_configured": true, 00:30:27.634 "data_offset": 0, 00:30:27.634 "data_size": 65536 00:30:27.634 } 00:30:27.634 ] 00:30:27.634 } 00:30:27.634 } 00:30:27.634 }' 00:30:27.634 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:27.634 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:30:27.634 BaseBdev2 00:30:27.634 BaseBdev3 00:30:27.634 BaseBdev4' 00:30:27.634 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:27.634 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:27.634 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:30:27.892 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:27.892 "name": "NewBaseBdev", 00:30:27.892 "aliases": [ 00:30:27.892 
"0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d" 00:30:27.892 ], 00:30:27.892 "product_name": "Malloc disk", 00:30:27.892 "block_size": 512, 00:30:27.892 "num_blocks": 65536, 00:30:27.892 "uuid": "0c5bc5c0-df77-4ffa-a6f2-9589ff42f58d", 00:30:27.892 "assigned_rate_limits": { 00:30:27.892 "rw_ios_per_sec": 0, 00:30:27.892 "rw_mbytes_per_sec": 0, 00:30:27.892 "r_mbytes_per_sec": 0, 00:30:27.892 "w_mbytes_per_sec": 0 00:30:27.892 }, 00:30:27.892 "claimed": true, 00:30:27.892 "claim_type": "exclusive_write", 00:30:27.892 "zoned": false, 00:30:27.892 "supported_io_types": { 00:30:27.892 "read": true, 00:30:27.892 "write": true, 00:30:27.892 "unmap": true, 00:30:27.892 "write_zeroes": true, 00:30:27.892 "flush": true, 00:30:27.892 "reset": true, 00:30:27.892 "compare": false, 00:30:27.892 "compare_and_write": false, 00:30:27.892 "abort": true, 00:30:27.892 "nvme_admin": false, 00:30:27.892 "nvme_io": false 00:30:27.892 }, 00:30:27.892 "memory_domains": [ 00:30:27.892 { 00:30:27.892 "dma_device_id": "system", 00:30:27.892 "dma_device_type": 1 00:30:27.892 }, 00:30:27.892 { 00:30:27.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.892 "dma_device_type": 2 00:30:27.892 } 00:30:27.892 ], 00:30:27.892 "driver_specific": {} 00:30:27.892 }' 00:30:27.892 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:27.892 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:27.892 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:27.892 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:27.892 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:28.149 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:28.407 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:28.407 "name": "BaseBdev2", 00:30:28.407 "aliases": [ 00:30:28.407 "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599" 00:30:28.407 ], 00:30:28.407 "product_name": "Malloc disk", 00:30:28.407 "block_size": 512, 00:30:28.407 "num_blocks": 65536, 00:30:28.407 "uuid": "3f7076ae-0c99-4ae7-9fdb-0a1d2e8a9599", 00:30:28.407 "assigned_rate_limits": { 00:30:28.407 "rw_ios_per_sec": 0, 00:30:28.407 "rw_mbytes_per_sec": 0, 00:30:28.407 "r_mbytes_per_sec": 0, 00:30:28.407 "w_mbytes_per_sec": 0 00:30:28.407 }, 00:30:28.407 "claimed": true, 00:30:28.407 "claim_type": "exclusive_write", 
00:30:28.407 "zoned": false, 00:30:28.407 "supported_io_types": { 00:30:28.407 "read": true, 00:30:28.407 "write": true, 00:30:28.407 "unmap": true, 00:30:28.407 "write_zeroes": true, 00:30:28.407 "flush": true, 00:30:28.407 "reset": true, 00:30:28.407 "compare": false, 00:30:28.407 "compare_and_write": false, 00:30:28.407 "abort": true, 00:30:28.407 "nvme_admin": false, 00:30:28.407 "nvme_io": false 00:30:28.407 }, 00:30:28.407 "memory_domains": [ 00:30:28.407 { 00:30:28.407 "dma_device_id": "system", 00:30:28.407 "dma_device_type": 1 00:30:28.407 }, 00:30:28.407 { 00:30:28.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.407 "dma_device_type": 2 00:30:28.407 } 00:30:28.407 ], 00:30:28.407 "driver_specific": {} 00:30:28.407 }' 00:30:28.407 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:28.665 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:28.665 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:28.665 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:28.665 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:28.665 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:28.665 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:28.665 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:28.924 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:28.924 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:28.924 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:28.924 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:28.924 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:28.924 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:28.924 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:29.182 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:29.182 "name": "BaseBdev3", 00:30:29.182 "aliases": [ 00:30:29.182 "82496407-f528-4ab5-9f51-a0aec9084823" 00:30:29.182 ], 00:30:29.182 "product_name": "Malloc disk", 00:30:29.182 "block_size": 512, 00:30:29.182 "num_blocks": 65536, 00:30:29.182 "uuid": "82496407-f528-4ab5-9f51-a0aec9084823", 00:30:29.182 "assigned_rate_limits": { 00:30:29.182 "rw_ios_per_sec": 0, 00:30:29.182 "rw_mbytes_per_sec": 0, 00:30:29.182 "r_mbytes_per_sec": 0, 00:30:29.182 "w_mbytes_per_sec": 0 00:30:29.182 }, 00:30:29.182 "claimed": true, 00:30:29.182 "claim_type": "exclusive_write", 00:30:29.182 "zoned": false, 00:30:29.182 "supported_io_types": { 00:30:29.182 "read": true, 00:30:29.182 "write": true, 00:30:29.182 "unmap": true, 00:30:29.182 "write_zeroes": true, 00:30:29.182 "flush": true, 00:30:29.182 "reset": true, 00:30:29.182 "compare": false, 00:30:29.182 "compare_and_write": false, 00:30:29.182 "abort": true, 00:30:29.182 "nvme_admin": false, 00:30:29.182 "nvme_io": false 00:30:29.182 }, 00:30:29.182 "memory_domains": [ 00:30:29.182 { 00:30:29.182 "dma_device_id": 
"system", 00:30:29.182 "dma_device_type": 1 00:30:29.182 }, 00:30:29.182 { 00:30:29.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:29.182 "dma_device_type": 2 00:30:29.182 } 00:30:29.182 ], 00:30:29.182 "driver_specific": {} 00:30:29.182 }' 00:30:29.182 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:29.182 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:29.182 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:29.182 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:29.441 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:29.441 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:29.441 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:29.441 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:29.441 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:29.441 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:29.441 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:29.699 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:29.699 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:29.699 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:30:29.699 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:29.957 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:29.957 "name": "BaseBdev4", 00:30:29.957 "aliases": [ 00:30:29.957 "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d" 00:30:29.957 ], 00:30:29.957 "product_name": "Malloc disk", 00:30:29.957 "block_size": 512, 00:30:29.957 "num_blocks": 65536, 00:30:29.957 "uuid": "576024dc-0688-4c9b-bb58-2eaa9c9b4f5d", 00:30:29.957 "assigned_rate_limits": { 00:30:29.957 "rw_ios_per_sec": 0, 00:30:29.957 "rw_mbytes_per_sec": 0, 00:30:29.957 "r_mbytes_per_sec": 0, 00:30:29.957 "w_mbytes_per_sec": 0 00:30:29.957 }, 00:30:29.957 "claimed": true, 00:30:29.958 "claim_type": "exclusive_write", 00:30:29.958 "zoned": false, 00:30:29.958 "supported_io_types": { 00:30:29.958 "read": true, 00:30:29.958 "write": true, 00:30:29.958 "unmap": true, 00:30:29.958 "write_zeroes": true, 00:30:29.958 "flush": true, 00:30:29.958 "reset": true, 00:30:29.958 "compare": false, 00:30:29.958 "compare_and_write": false, 00:30:29.958 "abort": true, 00:30:29.958 "nvme_admin": false, 00:30:29.958 "nvme_io": false 00:30:29.958 }, 00:30:29.958 "memory_domains": [ 00:30:29.958 { 00:30:29.958 "dma_device_id": "system", 00:30:29.958 "dma_device_type": 1 00:30:29.958 }, 00:30:29.958 { 00:30:29.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:29.958 "dma_device_type": 2 00:30:29.958 } 00:30:29.958 ], 00:30:29.958 "driver_specific": {} 00:30:29.958 }' 00:30:29.958 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:29.958 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:29.958 11:23:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:29.958 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:29.958 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:29.958 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:29.958 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:30.217 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:30.217 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:30.217 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:30.217 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:30.217 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:30.217 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:30.475 [2024-05-15 11:23:49.004070] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:30.475 [2024-05-15 11:23:49.004112] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:30.475 [2024-05-15 11:23:49.004179] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:30.475 [2024-05-15 11:23:49.004222] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:30.475 [2024-05-15 11:23:49.004233] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:30:30.475 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 64482 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 64482 ']' 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 64482 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64482 00:30:30.476 killing process with pid 64482 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64482' 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 64482 00:30:30.476 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 64482 00:30:30.476 [2024-05-15 11:23:49.052596] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:31.043 [2024-05-15 11:23:49.390130] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:31.981 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:30:31.981 00:30:31.981 real 0m34.588s 00:30:31.981 user 1m5.213s 
00:30:31.981 sys 0m3.496s 00:30:31.981 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:31.981 ************************************ 00:30:31.981 END TEST raid_state_function_test 00:30:31.981 ************************************ 00:30:31.981 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.240 11:23:50 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:30:32.240 11:23:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:30:32.240 11:23:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:32.240 11:23:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:32.240 ************************************ 00:30:32.240 START TEST raid_state_function_test_sb 00:30:32.240 ************************************ 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:30:32.240 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:30:32.241 11:23:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:30:32.241 Process raid pid: 65601 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=65601 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 65601' 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 65601 /var/tmp/spdk-raid.sock 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 65601 ']' 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:32.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:32.241 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.241 [2024-05-15 11:23:50.807124] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:30:32.241 [2024-05-15 11:23:50.807326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.502 [2024-05-15 11:23:50.969918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.760 [2024-05-15 11:23:51.213118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.019 [2024-05-15 11:23:51.403783] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:33.019 11:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:33.019 11:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:30:33.019 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:33.278 [2024-05-15 11:23:51.868569] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:33.278 [2024-05-15 11:23:51.868654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:33.278 [2024-05-15 11:23:51.868672] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:33.278 [2024-05-15 11:23:51.868692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:33.278 [2024-05-15 11:23:51.868700] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:33.278 [2024-05-15 11:23:51.868746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:33.278 [2024-05-15 11:23:51.868758] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:33.278 [2024-05-15 11:23:51.868781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:33.278 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.537 11:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:33.537 "name": "Existed_Raid", 00:30:33.537 "uuid": "0c55aa24-af9b-4502-94ee-04a38c91f40e", 00:30:33.537 "strip_size_kb": 64, 00:30:33.537 "state": "configuring", 00:30:33.537 "raid_level": "raid0", 00:30:33.537 "superblock": true, 00:30:33.537 "num_base_bdevs": 4, 00:30:33.537 "num_base_bdevs_discovered": 0, 00:30:33.537 "num_base_bdevs_operational": 4, 00:30:33.537 "base_bdevs_list": [ 00:30:33.537 { 00:30:33.537 "name": "BaseBdev1", 00:30:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.537 "is_configured": false, 00:30:33.537 "data_offset": 0, 00:30:33.537 "data_size": 0 00:30:33.537 }, 00:30:33.537 { 00:30:33.537 "name": "BaseBdev2", 00:30:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.537 "is_configured": false, 00:30:33.537 "data_offset": 0, 00:30:33.537 "data_size": 0 00:30:33.537 }, 00:30:33.537 { 00:30:33.537 "name": "BaseBdev3", 00:30:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.537 "is_configured": false, 00:30:33.537 "data_offset": 0, 00:30:33.537 "data_size": 0 00:30:33.537 }, 00:30:33.537 { 00:30:33.537 "name": "BaseBdev4", 00:30:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.537 "is_configured": false, 00:30:33.537 "data_offset": 0, 00:30:33.537 "data_size": 0 00:30:33.537 } 00:30:33.537 ] 00:30:33.537 }' 00:30:33.537 11:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:33.537 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.103 11:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:34.362 [2024-05-15 11:23:52.924540] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:34.362 [2024-05-15 11:23:52.924587] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:30:34.362 11:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:34.621 [2024-05-15 11:23:53.116604] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:34.621 [2024-05-15 11:23:53.116689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:34.621 [2024-05-15 11:23:53.116706] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:34.621 [2024-05-15 11:23:53.116734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:34.621 [2024-05-15 11:23:53.116744] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:34.621 [2024-05-15 11:23:53.116764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:34.621 [2024-05-15 11:23:53.116772] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:34.621 [2024-05-15 11:23:53.116800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:34.621 11:23:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:34.880 [2024-05-15 11:23:53.347535] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:34.880 BaseBdev1 00:30:34.880 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:30:34.880 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:34.880 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:34.880 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:34.880 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:34.880 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:34.880 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:35.184 [ 00:30:35.184 { 00:30:35.184 "name": "BaseBdev1", 00:30:35.184 "aliases": [ 00:30:35.184 "52b08108-9fc8-488d-9963-11c273000652" 00:30:35.184 ], 00:30:35.184 "product_name": "Malloc disk", 00:30:35.184 "block_size": 512, 00:30:35.184 "num_blocks": 65536, 00:30:35.184 "uuid": "52b08108-9fc8-488d-9963-11c273000652", 00:30:35.184 "assigned_rate_limits": { 00:30:35.184 "rw_ios_per_sec": 0, 00:30:35.184 "rw_mbytes_per_sec": 0, 00:30:35.184 "r_mbytes_per_sec": 0, 00:30:35.184 "w_mbytes_per_sec": 0 00:30:35.184 }, 00:30:35.184 "claimed": true, 00:30:35.184 "claim_type": "exclusive_write", 00:30:35.184 "zoned": false, 00:30:35.184 "supported_io_types": { 00:30:35.184 "read": true, 00:30:35.184 "write": true, 00:30:35.184 "unmap": true, 00:30:35.184 "write_zeroes": true, 00:30:35.184 "flush": true, 00:30:35.184 "reset": true, 00:30:35.184 "compare": false, 00:30:35.184 "compare_and_write": false, 00:30:35.184 "abort": true, 00:30:35.184 "nvme_admin": false, 00:30:35.184 "nvme_io": false 00:30:35.184 }, 00:30:35.184 "memory_domains": [ 00:30:35.184 { 00:30:35.184 "dma_device_id": "system", 00:30:35.184 "dma_device_type": 1 00:30:35.184 }, 00:30:35.184 { 00:30:35.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:35.184 "dma_device_type": 2 00:30:35.184 } 00:30:35.184 ], 00:30:35.184 "driver_specific": {} 00:30:35.184 } 00:30:35.184 ] 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.184 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:35.443 11:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:35.443 "name": "Existed_Raid", 00:30:35.443 "uuid": "fbd8105e-a9c9-4f60-9681-70618a1ba43c", 00:30:35.443 "strip_size_kb": 64, 00:30:35.443 "state": "configuring", 00:30:35.443 "raid_level": "raid0", 00:30:35.443 "superblock": true, 00:30:35.443 "num_base_bdevs": 4, 00:30:35.443 "num_base_bdevs_discovered": 1, 00:30:35.443 "num_base_bdevs_operational": 4, 00:30:35.443 "base_bdevs_list": [ 00:30:35.443 { 00:30:35.443 "name": "BaseBdev1", 00:30:35.443 "uuid": "52b08108-9fc8-488d-9963-11c273000652", 00:30:35.443 "is_configured": true, 00:30:35.443 "data_offset": 2048, 00:30:35.443 "data_size": 63488 00:30:35.443 }, 00:30:35.443 { 00:30:35.443 "name": "BaseBdev2", 00:30:35.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.443 "is_configured": false, 00:30:35.443 "data_offset": 0, 00:30:35.443 "data_size": 0 00:30:35.443 }, 00:30:35.443 { 00:30:35.443 "name": "BaseBdev3", 00:30:35.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.443 "is_configured": false, 00:30:35.443 "data_offset": 0, 00:30:35.443 "data_size": 0 00:30:35.443 }, 00:30:35.443 { 00:30:35.443 "name": "BaseBdev4", 00:30:35.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.443 "is_configured": false, 00:30:35.443 "data_offset": 0, 00:30:35.443 "data_size": 0 00:30:35.443 } 00:30:35.443 ] 00:30:35.443 }' 00:30:35.443 11:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:35.443 11:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.050 11:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:36.308 [2024-05-15 11:23:54.831896] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:36.308 [2024-05-15 11:23:54.831960] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:30:36.308 11:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:36.568 [2024-05-15 11:23:55.076043] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:36.568 [2024-05-15 11:23:55.077444] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:36.568 [2024-05-15 11:23:55.077535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:30:36.568 [2024-05-15 11:23:55.077576] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:36.568 [2024-05-15 11:23:55.077604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:36.568 [2024-05-15 11:23:55.077614] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:36.568 [2024-05-15 11:23:55.077632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:36.568 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.827 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:36.827 "name": "Existed_Raid", 00:30:36.827 "uuid": "18581814-3c3c-4b77-a689-67dd0f0769b8", 00:30:36.827 "strip_size_kb": 64, 00:30:36.827 "state": "configuring", 00:30:36.827 "raid_level": "raid0", 00:30:36.827 "superblock": true, 00:30:36.827 "num_base_bdevs": 4, 00:30:36.827 "num_base_bdevs_discovered": 1, 00:30:36.827 "num_base_bdevs_operational": 4, 00:30:36.827 "base_bdevs_list": [ 00:30:36.827 { 00:30:36.827 "name": "BaseBdev1", 00:30:36.827 "uuid": "52b08108-9fc8-488d-9963-11c273000652", 00:30:36.827 "is_configured": true, 00:30:36.827 "data_offset": 2048, 00:30:36.827 "data_size": 63488 00:30:36.827 }, 00:30:36.827 { 00:30:36.827 "name": "BaseBdev2", 00:30:36.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.827 "is_configured": false, 00:30:36.827 "data_offset": 0, 00:30:36.827 "data_size": 0 00:30:36.827 }, 00:30:36.827 { 00:30:36.827 "name": "BaseBdev3", 00:30:36.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.827 "is_configured": false, 00:30:36.827 "data_offset": 0, 00:30:36.827 "data_size": 0 00:30:36.827 }, 00:30:36.827 { 00:30:36.827 "name": "BaseBdev4", 00:30:36.827 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:36.827 "is_configured": false, 00:30:36.827 "data_offset": 0, 00:30:36.827 "data_size": 0 00:30:36.827 } 00:30:36.827 ] 00:30:36.827 }' 00:30:36.827 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:36.827 11:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.395 11:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:37.654 [2024-05-15 11:23:56.190583] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:37.654 BaseBdev2 00:30:37.654 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:30:37.654 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:37.654 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:37.654 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:37.654 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:37.654 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:37.654 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:37.913 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:38.172 [ 00:30:38.172 { 00:30:38.172 "name": "BaseBdev2", 00:30:38.172 "aliases": [ 00:30:38.172 "59ef19ff-0f94-4339-8168-b661670d60de" 00:30:38.172 ], 00:30:38.172 "product_name": "Malloc disk", 00:30:38.172 "block_size": 512, 00:30:38.172 "num_blocks": 65536, 00:30:38.172 "uuid": "59ef19ff-0f94-4339-8168-b661670d60de", 00:30:38.172 "assigned_rate_limits": { 00:30:38.172 "rw_ios_per_sec": 0, 00:30:38.172 "rw_mbytes_per_sec": 0, 00:30:38.172 "r_mbytes_per_sec": 0, 00:30:38.172 "w_mbytes_per_sec": 0 00:30:38.172 }, 00:30:38.172 "claimed": true, 00:30:38.172 "claim_type": "exclusive_write", 00:30:38.172 "zoned": false, 00:30:38.172 "supported_io_types": { 00:30:38.172 "read": true, 00:30:38.172 "write": true, 00:30:38.172 "unmap": true, 00:30:38.172 "write_zeroes": true, 00:30:38.172 "flush": true, 00:30:38.172 "reset": true, 00:30:38.172 "compare": false, 00:30:38.172 "compare_and_write": false, 00:30:38.172 "abort": true, 00:30:38.172 "nvme_admin": false, 00:30:38.172 "nvme_io": false 00:30:38.172 }, 00:30:38.172 "memory_domains": [ 00:30:38.172 { 00:30:38.172 "dma_device_id": "system", 00:30:38.172 "dma_device_type": 1 00:30:38.172 }, 00:30:38.172 { 00:30:38.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.172 "dma_device_type": 2 00:30:38.172 } 00:30:38.172 ], 00:30:38.172 "driver_specific": {} 00:30:38.172 } 00:30:38.172 ] 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:30:38.172 11:23:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.172 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.431 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:38.431 "name": "Existed_Raid", 00:30:38.431 "uuid": "18581814-3c3c-4b77-a689-67dd0f0769b8", 00:30:38.431 "strip_size_kb": 64, 00:30:38.431 "state": "configuring", 00:30:38.431 "raid_level": "raid0", 00:30:38.431 "superblock": true, 00:30:38.431 "num_base_bdevs": 4, 00:30:38.431 "num_base_bdevs_discovered": 2, 00:30:38.431 "num_base_bdevs_operational": 4, 00:30:38.431 "base_bdevs_list": [ 00:30:38.431 { 00:30:38.431 "name": "BaseBdev1", 00:30:38.431 "uuid": "52b08108-9fc8-488d-9963-11c273000652", 00:30:38.431 "is_configured": true, 00:30:38.431 "data_offset": 2048, 00:30:38.431 "data_size": 63488 00:30:38.431 }, 00:30:38.431 { 00:30:38.431 "name": "BaseBdev2", 00:30:38.431 "uuid": "59ef19ff-0f94-4339-8168-b661670d60de", 00:30:38.431 "is_configured": true, 00:30:38.431 "data_offset": 2048, 00:30:38.431 "data_size": 63488 00:30:38.431 }, 00:30:38.431 { 00:30:38.431 "name": "BaseBdev3", 00:30:38.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.431 "is_configured": false, 00:30:38.431 "data_offset": 0, 00:30:38.431 "data_size": 0 00:30:38.431 }, 00:30:38.431 { 00:30:38.431 "name": "BaseBdev4", 00:30:38.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.431 "is_configured": false, 00:30:38.431 "data_offset": 0, 00:30:38.431 "data_size": 0 00:30:38.431 } 00:30:38.431 ] 00:30:38.431 }' 00:30:38.431 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:38.431 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.015 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:39.273 [2024-05-15 11:23:57.682817] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:39.273 BaseBdev3 00:30:39.273 11:23:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:30:39.273 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:39.273 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:39.273 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:39.273 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:39.273 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:39.273 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:39.273 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:39.532 [ 00:30:39.532 { 00:30:39.532 "name": "BaseBdev3", 00:30:39.532 "aliases": [ 00:30:39.532 "3d492bf3-f7a8-4b23-b10c-38ec295c79b2" 00:30:39.532 ], 00:30:39.532 "product_name": "Malloc disk", 00:30:39.532 "block_size": 512, 00:30:39.532 "num_blocks": 65536, 00:30:39.532 "uuid": "3d492bf3-f7a8-4b23-b10c-38ec295c79b2", 00:30:39.532 "assigned_rate_limits": { 00:30:39.532 "rw_ios_per_sec": 0, 00:30:39.532 "rw_mbytes_per_sec": 0, 00:30:39.532 "r_mbytes_per_sec": 0, 00:30:39.532 "w_mbytes_per_sec": 0 00:30:39.532 }, 00:30:39.532 "claimed": true, 00:30:39.532 "claim_type": "exclusive_write", 00:30:39.532 "zoned": false, 00:30:39.532 "supported_io_types": { 00:30:39.532 "read": true, 00:30:39.532 "write": true, 00:30:39.532 "unmap": true, 00:30:39.532 "write_zeroes": true, 00:30:39.532 "flush": true, 00:30:39.532 "reset": true, 00:30:39.532 "compare": false, 00:30:39.532 "compare_and_write": false, 00:30:39.532 "abort": true, 00:30:39.532 "nvme_admin": false, 00:30:39.532 "nvme_io": false 00:30:39.532 }, 00:30:39.532 "memory_domains": [ 00:30:39.532 { 00:30:39.532 "dma_device_id": "system", 00:30:39.532 "dma_device_type": 1 00:30:39.532 }, 00:30:39.532 { 00:30:39.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:39.532 "dma_device_type": 2 00:30:39.532 } 00:30:39.532 ], 00:30:39.532 "driver_specific": {} 00:30:39.532 } 00:30:39.532 ] 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.532 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.790 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:39.790 "name": "Existed_Raid", 00:30:39.790 "uuid": "18581814-3c3c-4b77-a689-67dd0f0769b8", 00:30:39.790 "strip_size_kb": 64, 00:30:39.790 "state": "configuring", 00:30:39.790 "raid_level": "raid0", 00:30:39.790 "superblock": true, 00:30:39.790 "num_base_bdevs": 4, 00:30:39.790 "num_base_bdevs_discovered": 3, 00:30:39.790 "num_base_bdevs_operational": 4, 00:30:39.790 "base_bdevs_list": [ 00:30:39.790 { 00:30:39.790 "name": "BaseBdev1", 00:30:39.790 "uuid": "52b08108-9fc8-488d-9963-11c273000652", 00:30:39.790 "is_configured": true, 00:30:39.790 "data_offset": 2048, 00:30:39.790 "data_size": 63488 00:30:39.790 }, 00:30:39.790 { 00:30:39.790 "name": "BaseBdev2", 00:30:39.790 "uuid": "59ef19ff-0f94-4339-8168-b661670d60de", 00:30:39.790 "is_configured": true, 00:30:39.790 "data_offset": 2048, 00:30:39.790 "data_size": 63488 00:30:39.790 }, 00:30:39.790 { 00:30:39.790 "name": "BaseBdev3", 00:30:39.790 "uuid": "3d492bf3-f7a8-4b23-b10c-38ec295c79b2", 00:30:39.790 "is_configured": true, 00:30:39.790 "data_offset": 2048, 00:30:39.790 "data_size": 63488 00:30:39.790 }, 00:30:39.790 { 00:30:39.790 "name": "BaseBdev4", 00:30:39.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.790 "is_configured": false, 00:30:39.790 "data_offset": 0, 00:30:39.790 "data_size": 0 00:30:39.790 } 00:30:39.790 ] 00:30:39.790 }' 00:30:39.790 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:39.790 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.355 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:30:40.613 BaseBdev4 00:30:40.613 [2024-05-15 11:23:59.206945] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:40.613 [2024-05-15 11:23:59.207129] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:30:40.613 [2024-05-15 11:23:59.207145] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:30:40.613 [2024-05-15 11:23:59.207262] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:30:40.613 [2024-05-15 11:23:59.207479] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:30:40.613 [2024-05-15 11:23:59.207495] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:30:40.613 [2024-05-15 11:23:59.207635] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:40.613 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev 
BaseBdev4 00:30:40.613 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:30:40.613 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:40.613 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:40.613 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:40.613 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:40.613 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:40.872 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:41.130 [ 00:30:41.130 { 00:30:41.130 "name": "BaseBdev4", 00:30:41.130 "aliases": [ 00:30:41.130 "f2e5fcf3-0531-4c6c-b06e-111a43ebc8d2" 00:30:41.130 ], 00:30:41.130 "product_name": "Malloc disk", 00:30:41.130 "block_size": 512, 00:30:41.130 "num_blocks": 65536, 00:30:41.130 "uuid": "f2e5fcf3-0531-4c6c-b06e-111a43ebc8d2", 00:30:41.130 "assigned_rate_limits": { 00:30:41.130 "rw_ios_per_sec": 0, 00:30:41.130 "rw_mbytes_per_sec": 0, 00:30:41.130 "r_mbytes_per_sec": 0, 00:30:41.130 "w_mbytes_per_sec": 0 00:30:41.130 }, 00:30:41.130 "claimed": true, 00:30:41.130 "claim_type": "exclusive_write", 00:30:41.130 "zoned": false, 00:30:41.130 "supported_io_types": { 00:30:41.130 "read": true, 00:30:41.130 "write": true, 00:30:41.130 "unmap": true, 00:30:41.130 "write_zeroes": true, 00:30:41.130 "flush": true, 00:30:41.130 "reset": true, 00:30:41.130 "compare": false, 00:30:41.130 "compare_and_write": false, 00:30:41.130 "abort": true, 00:30:41.130 "nvme_admin": false, 00:30:41.130 "nvme_io": false 00:30:41.130 }, 00:30:41.130 "memory_domains": [ 00:30:41.130 { 00:30:41.130 "dma_device_id": "system", 00:30:41.130 "dma_device_type": 1 00:30:41.130 }, 00:30:41.130 { 00:30:41.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.130 "dma_device_type": 2 00:30:41.130 } 00:30:41.130 ], 00:30:41.130 "driver_specific": {} 00:30:41.130 } 00:30:41.130 ] 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:41.130 11:23:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.130 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.388 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:41.388 "name": "Existed_Raid", 00:30:41.388 "uuid": "18581814-3c3c-4b77-a689-67dd0f0769b8", 00:30:41.388 "strip_size_kb": 64, 00:30:41.388 "state": "online", 00:30:41.388 "raid_level": "raid0", 00:30:41.388 "superblock": true, 00:30:41.388 "num_base_bdevs": 4, 00:30:41.388 "num_base_bdevs_discovered": 4, 00:30:41.388 "num_base_bdevs_operational": 4, 00:30:41.388 "base_bdevs_list": [ 00:30:41.388 { 00:30:41.388 "name": "BaseBdev1", 00:30:41.388 "uuid": "52b08108-9fc8-488d-9963-11c273000652", 00:30:41.388 "is_configured": true, 00:30:41.388 "data_offset": 2048, 00:30:41.388 "data_size": 63488 00:30:41.388 }, 00:30:41.388 { 00:30:41.388 "name": "BaseBdev2", 00:30:41.388 "uuid": "59ef19ff-0f94-4339-8168-b661670d60de", 00:30:41.388 "is_configured": true, 00:30:41.388 "data_offset": 2048, 00:30:41.388 "data_size": 63488 00:30:41.388 }, 00:30:41.388 { 00:30:41.388 "name": "BaseBdev3", 00:30:41.388 "uuid": "3d492bf3-f7a8-4b23-b10c-38ec295c79b2", 00:30:41.388 "is_configured": true, 00:30:41.388 "data_offset": 2048, 00:30:41.388 "data_size": 63488 00:30:41.388 }, 00:30:41.388 { 00:30:41.388 "name": "BaseBdev4", 00:30:41.388 "uuid": "f2e5fcf3-0531-4c6c-b06e-111a43ebc8d2", 00:30:41.388 "is_configured": true, 00:30:41.388 "data_offset": 2048, 00:30:41.388 "data_size": 63488 00:30:41.388 } 00:30:41.388 ] 00:30:41.388 }' 00:30:41.388 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:41.388 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.953 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:30:41.953 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:30:41.953 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:30:41.953 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:30:41.953 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:30:41.953 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:30:41.953 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:41.953 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:30:42.211 [2024-05-15 11:24:00.727452] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:42.211 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:30:42.211 "name": "Existed_Raid", 00:30:42.211 "aliases": 
[ 00:30:42.211 "18581814-3c3c-4b77-a689-67dd0f0769b8" 00:30:42.211 ], 00:30:42.211 "product_name": "Raid Volume", 00:30:42.211 "block_size": 512, 00:30:42.211 "num_blocks": 253952, 00:30:42.212 "uuid": "18581814-3c3c-4b77-a689-67dd0f0769b8", 00:30:42.212 "assigned_rate_limits": { 00:30:42.212 "rw_ios_per_sec": 0, 00:30:42.212 "rw_mbytes_per_sec": 0, 00:30:42.212 "r_mbytes_per_sec": 0, 00:30:42.212 "w_mbytes_per_sec": 0 00:30:42.212 }, 00:30:42.212 "claimed": false, 00:30:42.212 "zoned": false, 00:30:42.212 "supported_io_types": { 00:30:42.212 "read": true, 00:30:42.212 "write": true, 00:30:42.212 "unmap": true, 00:30:42.212 "write_zeroes": true, 00:30:42.212 "flush": true, 00:30:42.212 "reset": true, 00:30:42.212 "compare": false, 00:30:42.212 "compare_and_write": false, 00:30:42.212 "abort": false, 00:30:42.212 "nvme_admin": false, 00:30:42.212 "nvme_io": false 00:30:42.212 }, 00:30:42.212 "memory_domains": [ 00:30:42.212 { 00:30:42.212 "dma_device_id": "system", 00:30:42.212 "dma_device_type": 1 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.212 "dma_device_type": 2 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "dma_device_id": "system", 00:30:42.212 "dma_device_type": 1 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.212 "dma_device_type": 2 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "dma_device_id": "system", 00:30:42.212 "dma_device_type": 1 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.212 "dma_device_type": 2 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "dma_device_id": "system", 00:30:42.212 "dma_device_type": 1 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.212 "dma_device_type": 2 00:30:42.212 } 00:30:42.212 ], 00:30:42.212 "driver_specific": { 00:30:42.212 "raid": { 00:30:42.212 "uuid": "18581814-3c3c-4b77-a689-67dd0f0769b8", 00:30:42.212 "strip_size_kb": 64, 00:30:42.212 "state": "online", 00:30:42.212 "raid_level": "raid0", 00:30:42.212 "superblock": true, 00:30:42.212 "num_base_bdevs": 4, 00:30:42.212 "num_base_bdevs_discovered": 4, 00:30:42.212 "num_base_bdevs_operational": 4, 00:30:42.212 "base_bdevs_list": [ 00:30:42.212 { 00:30:42.212 "name": "BaseBdev1", 00:30:42.212 "uuid": "52b08108-9fc8-488d-9963-11c273000652", 00:30:42.212 "is_configured": true, 00:30:42.212 "data_offset": 2048, 00:30:42.212 "data_size": 63488 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "name": "BaseBdev2", 00:30:42.212 "uuid": "59ef19ff-0f94-4339-8168-b661670d60de", 00:30:42.212 "is_configured": true, 00:30:42.212 "data_offset": 2048, 00:30:42.212 "data_size": 63488 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "name": "BaseBdev3", 00:30:42.212 "uuid": "3d492bf3-f7a8-4b23-b10c-38ec295c79b2", 00:30:42.212 "is_configured": true, 00:30:42.212 "data_offset": 2048, 00:30:42.212 "data_size": 63488 00:30:42.212 }, 00:30:42.212 { 00:30:42.212 "name": "BaseBdev4", 00:30:42.212 "uuid": "f2e5fcf3-0531-4c6c-b06e-111a43ebc8d2", 00:30:42.212 "is_configured": true, 00:30:42.212 "data_offset": 2048, 00:30:42.212 "data_size": 63488 00:30:42.212 } 00:30:42.212 ] 00:30:42.212 } 00:30:42.212 } 00:30:42.212 }' 00:30:42.212 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:42.212 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:30:42.212 BaseBdev2 00:30:42.212 
BaseBdev3 00:30:42.212 BaseBdev4' 00:30:42.212 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:42.212 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:42.212 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:42.470 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:42.470 "name": "BaseBdev1", 00:30:42.470 "aliases": [ 00:30:42.470 "52b08108-9fc8-488d-9963-11c273000652" 00:30:42.470 ], 00:30:42.470 "product_name": "Malloc disk", 00:30:42.470 "block_size": 512, 00:30:42.470 "num_blocks": 65536, 00:30:42.470 "uuid": "52b08108-9fc8-488d-9963-11c273000652", 00:30:42.470 "assigned_rate_limits": { 00:30:42.470 "rw_ios_per_sec": 0, 00:30:42.470 "rw_mbytes_per_sec": 0, 00:30:42.470 "r_mbytes_per_sec": 0, 00:30:42.470 "w_mbytes_per_sec": 0 00:30:42.470 }, 00:30:42.470 "claimed": true, 00:30:42.470 "claim_type": "exclusive_write", 00:30:42.470 "zoned": false, 00:30:42.470 "supported_io_types": { 00:30:42.470 "read": true, 00:30:42.471 "write": true, 00:30:42.471 "unmap": true, 00:30:42.471 "write_zeroes": true, 00:30:42.471 "flush": true, 00:30:42.471 "reset": true, 00:30:42.471 "compare": false, 00:30:42.471 "compare_and_write": false, 00:30:42.471 "abort": true, 00:30:42.471 "nvme_admin": false, 00:30:42.471 "nvme_io": false 00:30:42.471 }, 00:30:42.471 "memory_domains": [ 00:30:42.471 { 00:30:42.471 "dma_device_id": "system", 00:30:42.471 "dma_device_type": 1 00:30:42.471 }, 00:30:42.471 { 00:30:42.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.471 "dma_device_type": 2 00:30:42.471 } 00:30:42.471 ], 00:30:42.471 "driver_specific": {} 00:30:42.471 }' 00:30:42.471 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:42.729 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:42.729 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:42.729 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:42.729 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:42.729 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:42.729 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:42.729 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:42.987 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:42.987 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:42.987 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:42.987 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:42.987 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:42.987 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:42.987 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:43.245 
11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:43.245 "name": "BaseBdev2", 00:30:43.245 "aliases": [ 00:30:43.245 "59ef19ff-0f94-4339-8168-b661670d60de" 00:30:43.245 ], 00:30:43.245 "product_name": "Malloc disk", 00:30:43.245 "block_size": 512, 00:30:43.245 "num_blocks": 65536, 00:30:43.245 "uuid": "59ef19ff-0f94-4339-8168-b661670d60de", 00:30:43.245 "assigned_rate_limits": { 00:30:43.245 "rw_ios_per_sec": 0, 00:30:43.245 "rw_mbytes_per_sec": 0, 00:30:43.245 "r_mbytes_per_sec": 0, 00:30:43.245 "w_mbytes_per_sec": 0 00:30:43.245 }, 00:30:43.245 "claimed": true, 00:30:43.245 "claim_type": "exclusive_write", 00:30:43.245 "zoned": false, 00:30:43.245 "supported_io_types": { 00:30:43.245 "read": true, 00:30:43.245 "write": true, 00:30:43.245 "unmap": true, 00:30:43.245 "write_zeroes": true, 00:30:43.245 "flush": true, 00:30:43.245 "reset": true, 00:30:43.245 "compare": false, 00:30:43.245 "compare_and_write": false, 00:30:43.245 "abort": true, 00:30:43.245 "nvme_admin": false, 00:30:43.245 "nvme_io": false 00:30:43.245 }, 00:30:43.245 "memory_domains": [ 00:30:43.245 { 00:30:43.245 "dma_device_id": "system", 00:30:43.245 "dma_device_type": 1 00:30:43.245 }, 00:30:43.245 { 00:30:43.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:43.245 "dma_device_type": 2 00:30:43.245 } 00:30:43.245 ], 00:30:43.245 "driver_specific": {} 00:30:43.245 }' 00:30:43.245 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:43.503 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:43.503 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:43.503 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:43.503 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:43.503 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:43.503 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:43.503 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:43.761 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:43.761 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:43.761 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:43.761 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:43.761 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:43.761 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:43.761 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:44.019 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:44.019 "name": "BaseBdev3", 00:30:44.019 "aliases": [ 00:30:44.019 "3d492bf3-f7a8-4b23-b10c-38ec295c79b2" 00:30:44.019 ], 00:30:44.019 "product_name": "Malloc disk", 00:30:44.019 "block_size": 512, 00:30:44.019 "num_blocks": 65536, 00:30:44.019 "uuid": "3d492bf3-f7a8-4b23-b10c-38ec295c79b2", 00:30:44.019 "assigned_rate_limits": { 00:30:44.019 
"rw_ios_per_sec": 0, 00:30:44.019 "rw_mbytes_per_sec": 0, 00:30:44.019 "r_mbytes_per_sec": 0, 00:30:44.019 "w_mbytes_per_sec": 0 00:30:44.019 }, 00:30:44.019 "claimed": true, 00:30:44.019 "claim_type": "exclusive_write", 00:30:44.019 "zoned": false, 00:30:44.019 "supported_io_types": { 00:30:44.019 "read": true, 00:30:44.019 "write": true, 00:30:44.019 "unmap": true, 00:30:44.019 "write_zeroes": true, 00:30:44.019 "flush": true, 00:30:44.019 "reset": true, 00:30:44.019 "compare": false, 00:30:44.019 "compare_and_write": false, 00:30:44.019 "abort": true, 00:30:44.019 "nvme_admin": false, 00:30:44.019 "nvme_io": false 00:30:44.019 }, 00:30:44.019 "memory_domains": [ 00:30:44.019 { 00:30:44.019 "dma_device_id": "system", 00:30:44.019 "dma_device_type": 1 00:30:44.019 }, 00:30:44.019 { 00:30:44.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:44.019 "dma_device_type": 2 00:30:44.019 } 00:30:44.019 ], 00:30:44.019 "driver_specific": {} 00:30:44.019 }' 00:30:44.019 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:44.019 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:44.019 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:44.019 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:44.277 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:44.277 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:44.277 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:44.277 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:44.277 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:44.277 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:44.535 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:44.535 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:44.535 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:30:44.535 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:30:44.535 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:30:44.793 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:30:44.793 "name": "BaseBdev4", 00:30:44.793 "aliases": [ 00:30:44.793 "f2e5fcf3-0531-4c6c-b06e-111a43ebc8d2" 00:30:44.793 ], 00:30:44.793 "product_name": "Malloc disk", 00:30:44.793 "block_size": 512, 00:30:44.793 "num_blocks": 65536, 00:30:44.793 "uuid": "f2e5fcf3-0531-4c6c-b06e-111a43ebc8d2", 00:30:44.793 "assigned_rate_limits": { 00:30:44.793 "rw_ios_per_sec": 0, 00:30:44.793 "rw_mbytes_per_sec": 0, 00:30:44.793 "r_mbytes_per_sec": 0, 00:30:44.793 "w_mbytes_per_sec": 0 00:30:44.793 }, 00:30:44.793 "claimed": true, 00:30:44.793 "claim_type": "exclusive_write", 00:30:44.793 "zoned": false, 00:30:44.793 "supported_io_types": { 00:30:44.793 "read": true, 00:30:44.793 "write": true, 00:30:44.793 "unmap": true, 00:30:44.793 "write_zeroes": true, 00:30:44.793 "flush": true, 00:30:44.793 
"reset": true, 00:30:44.793 "compare": false, 00:30:44.793 "compare_and_write": false, 00:30:44.793 "abort": true, 00:30:44.793 "nvme_admin": false, 00:30:44.793 "nvme_io": false 00:30:44.793 }, 00:30:44.793 "memory_domains": [ 00:30:44.793 { 00:30:44.793 "dma_device_id": "system", 00:30:44.793 "dma_device_type": 1 00:30:44.793 }, 00:30:44.793 { 00:30:44.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:44.793 "dma_device_type": 2 00:30:44.793 } 00:30:44.793 ], 00:30:44.793 "driver_specific": {} 00:30:44.793 }' 00:30:44.793 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:44.793 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:30:44.793 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:30:44.793 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:44.793 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:30:45.050 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:45.050 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:45.050 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:30:45.050 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:45.050 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:45.050 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:30:45.306 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:30:45.306 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:45.306 [2024-05-15 11:24:03.895917] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:45.306 [2024-05-15 11:24:03.895964] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:45.306 [2024-05-15 11:24:03.896016] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:45.564 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.564 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:45.564 "name": "Existed_Raid", 00:30:45.564 "uuid": "18581814-3c3c-4b77-a689-67dd0f0769b8", 00:30:45.564 "strip_size_kb": 64, 00:30:45.564 "state": "offline", 00:30:45.564 "raid_level": "raid0", 00:30:45.564 "superblock": true, 00:30:45.564 "num_base_bdevs": 4, 00:30:45.564 "num_base_bdevs_discovered": 3, 00:30:45.564 "num_base_bdevs_operational": 3, 00:30:45.564 "base_bdevs_list": [ 00:30:45.564 { 00:30:45.564 "name": null, 00:30:45.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.564 "is_configured": false, 00:30:45.564 "data_offset": 2048, 00:30:45.564 "data_size": 63488 00:30:45.564 }, 00:30:45.564 { 00:30:45.564 "name": "BaseBdev2", 00:30:45.564 "uuid": "59ef19ff-0f94-4339-8168-b661670d60de", 00:30:45.564 "is_configured": true, 00:30:45.564 "data_offset": 2048, 00:30:45.564 "data_size": 63488 00:30:45.564 }, 00:30:45.564 { 00:30:45.564 "name": "BaseBdev3", 00:30:45.564 "uuid": "3d492bf3-f7a8-4b23-b10c-38ec295c79b2", 00:30:45.564 "is_configured": true, 00:30:45.564 "data_offset": 2048, 00:30:45.564 "data_size": 63488 00:30:45.564 }, 00:30:45.564 { 00:30:45.564 "name": "BaseBdev4", 00:30:45.564 "uuid": "f2e5fcf3-0531-4c6c-b06e-111a43ebc8d2", 00:30:45.564 "is_configured": true, 00:30:45.564 "data_offset": 2048, 00:30:45.564 "data_size": 63488 00:30:45.564 } 00:30:45.564 ] 00:30:45.564 }' 00:30:45.564 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:45.564 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.498 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:46.498 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:46.498 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.498 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:30:46.756 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:30:46.756 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:46.756 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:46.756 [2024-05-15 11:24:05.350877] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:47.014 11:24:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:47.014 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:47.014 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:30:47.014 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.014 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:30:47.014 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:47.014 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:47.272 [2024-05-15 11:24:05.826522] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:47.530 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:47.530 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:47.530 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.530 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:30:47.530 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:30:47.530 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:47.530 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:30:47.789 [2024-05-15 11:24:06.341505] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:47.789 [2024-05-15 11:24:06.341575] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:30:48.047 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:48.047 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:48.047 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.047 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:30:48.306 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:30:48.306 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:30:48.306 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:30:48.306 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:30:48.306 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:30:48.306 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:48.564 BaseBdev2 00:30:48.564 11:24:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:30:48.564 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:48.564 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:48.564 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:48.564 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:48.564 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:48.564 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:48.564 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:48.823 [ 00:30:48.823 { 00:30:48.823 "name": "BaseBdev2", 00:30:48.823 "aliases": [ 00:30:48.823 "604c5683-dec1-45b3-ac00-20592dcb625a" 00:30:48.823 ], 00:30:48.823 "product_name": "Malloc disk", 00:30:48.823 "block_size": 512, 00:30:48.823 "num_blocks": 65536, 00:30:48.823 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:30:48.823 "assigned_rate_limits": { 00:30:48.823 "rw_ios_per_sec": 0, 00:30:48.823 "rw_mbytes_per_sec": 0, 00:30:48.823 "r_mbytes_per_sec": 0, 00:30:48.823 "w_mbytes_per_sec": 0 00:30:48.823 }, 00:30:48.823 "claimed": false, 00:30:48.823 "zoned": false, 00:30:48.823 "supported_io_types": { 00:30:48.823 "read": true, 00:30:48.823 "write": true, 00:30:48.823 "unmap": true, 00:30:48.823 "write_zeroes": true, 00:30:48.823 "flush": true, 00:30:48.823 "reset": true, 00:30:48.823 "compare": false, 00:30:48.823 "compare_and_write": false, 00:30:48.823 "abort": true, 00:30:48.823 "nvme_admin": false, 00:30:48.823 "nvme_io": false 00:30:48.823 }, 00:30:48.823 "memory_domains": [ 00:30:48.823 { 00:30:48.823 "dma_device_id": "system", 00:30:48.823 "dma_device_type": 1 00:30:48.823 }, 00:30:48.823 { 00:30:48.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.823 "dma_device_type": 2 00:30:48.823 } 00:30:48.823 ], 00:30:48.823 "driver_specific": {} 00:30:48.823 } 00:30:48.823 ] 00:30:48.823 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:48.823 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:30:48.823 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:30:48.823 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:49.082 BaseBdev3 00:30:49.082 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:30:49.082 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:49.082 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:49.082 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:49.082 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:49.082 11:24:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:49.082 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:49.374 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:49.632 [ 00:30:49.632 { 00:30:49.632 "name": "BaseBdev3", 00:30:49.632 "aliases": [ 00:30:49.632 "16687581-704a-4412-be22-9b90faf7163f" 00:30:49.632 ], 00:30:49.632 "product_name": "Malloc disk", 00:30:49.632 "block_size": 512, 00:30:49.632 "num_blocks": 65536, 00:30:49.632 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:30:49.632 "assigned_rate_limits": { 00:30:49.632 "rw_ios_per_sec": 0, 00:30:49.632 "rw_mbytes_per_sec": 0, 00:30:49.632 "r_mbytes_per_sec": 0, 00:30:49.632 "w_mbytes_per_sec": 0 00:30:49.632 }, 00:30:49.632 "claimed": false, 00:30:49.632 "zoned": false, 00:30:49.632 "supported_io_types": { 00:30:49.632 "read": true, 00:30:49.632 "write": true, 00:30:49.632 "unmap": true, 00:30:49.632 "write_zeroes": true, 00:30:49.632 "flush": true, 00:30:49.632 "reset": true, 00:30:49.632 "compare": false, 00:30:49.632 "compare_and_write": false, 00:30:49.632 "abort": true, 00:30:49.632 "nvme_admin": false, 00:30:49.632 "nvme_io": false 00:30:49.632 }, 00:30:49.632 "memory_domains": [ 00:30:49.632 { 00:30:49.632 "dma_device_id": "system", 00:30:49.632 "dma_device_type": 1 00:30:49.632 }, 00:30:49.633 { 00:30:49.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:49.633 "dma_device_type": 2 00:30:49.633 } 00:30:49.633 ], 00:30:49.633 "driver_specific": {} 00:30:49.633 } 00:30:49.633 ] 00:30:49.633 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:49.633 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:30:49.633 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:30:49.633 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:30:49.894 BaseBdev4 00:30:49.894 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:30:49.894 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:30:49.894 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:49.894 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:49.894 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:49.894 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:49.894 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:50.152 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:50.152 [ 00:30:50.152 { 00:30:50.152 "name": "BaseBdev4", 00:30:50.152 "aliases": [ 00:30:50.152 "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110" 00:30:50.152 ], 
00:30:50.152 "product_name": "Malloc disk", 00:30:50.152 "block_size": 512, 00:30:50.152 "num_blocks": 65536, 00:30:50.152 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:30:50.152 "assigned_rate_limits": { 00:30:50.152 "rw_ios_per_sec": 0, 00:30:50.152 "rw_mbytes_per_sec": 0, 00:30:50.152 "r_mbytes_per_sec": 0, 00:30:50.152 "w_mbytes_per_sec": 0 00:30:50.152 }, 00:30:50.152 "claimed": false, 00:30:50.152 "zoned": false, 00:30:50.152 "supported_io_types": { 00:30:50.152 "read": true, 00:30:50.152 "write": true, 00:30:50.152 "unmap": true, 00:30:50.152 "write_zeroes": true, 00:30:50.152 "flush": true, 00:30:50.152 "reset": true, 00:30:50.152 "compare": false, 00:30:50.152 "compare_and_write": false, 00:30:50.152 "abort": true, 00:30:50.152 "nvme_admin": false, 00:30:50.152 "nvme_io": false 00:30:50.152 }, 00:30:50.152 "memory_domains": [ 00:30:50.152 { 00:30:50.152 "dma_device_id": "system", 00:30:50.152 "dma_device_type": 1 00:30:50.152 }, 00:30:50.152 { 00:30:50.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:50.152 "dma_device_type": 2 00:30:50.152 } 00:30:50.152 ], 00:30:50.152 "driver_specific": {} 00:30:50.152 } 00:30:50.152 ] 00:30:50.152 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:50.152 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:30:50.152 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:30:50.152 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:50.410 [2024-05-15 11:24:08.963104] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:50.410 [2024-05-15 11:24:08.963179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:50.410 [2024-05-15 11:24:08.963244] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:50.410 [2024-05-15 11:24:08.964801] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:50.411 [2024-05-15 11:24:08.964854] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.411 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:50.670 11:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:50.670 "name": "Existed_Raid", 00:30:50.670 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:30:50.670 "strip_size_kb": 64, 00:30:50.670 "state": "configuring", 00:30:50.670 "raid_level": "raid0", 00:30:50.670 "superblock": true, 00:30:50.670 "num_base_bdevs": 4, 00:30:50.670 "num_base_bdevs_discovered": 3, 00:30:50.670 "num_base_bdevs_operational": 4, 00:30:50.670 "base_bdevs_list": [ 00:30:50.670 { 00:30:50.670 "name": "BaseBdev1", 00:30:50.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.670 "is_configured": false, 00:30:50.670 "data_offset": 0, 00:30:50.670 "data_size": 0 00:30:50.670 }, 00:30:50.670 { 00:30:50.670 "name": "BaseBdev2", 00:30:50.670 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:30:50.670 "is_configured": true, 00:30:50.670 "data_offset": 2048, 00:30:50.670 "data_size": 63488 00:30:50.670 }, 00:30:50.670 { 00:30:50.670 "name": "BaseBdev3", 00:30:50.670 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:30:50.670 "is_configured": true, 00:30:50.670 "data_offset": 2048, 00:30:50.670 "data_size": 63488 00:30:50.670 }, 00:30:50.670 { 00:30:50.670 "name": "BaseBdev4", 00:30:50.670 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:30:50.670 "is_configured": true, 00:30:50.670 "data_offset": 2048, 00:30:50.670 "data_size": 63488 00:30:50.670 } 00:30:50.670 ] 00:30:50.670 }' 00:30:50.670 11:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:50.670 11:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:51.605 11:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:51.605 [2024-05-15 11:24:10.147322] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.605 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:51.865 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:51.865 "name": "Existed_Raid", 00:30:51.865 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:30:51.865 "strip_size_kb": 64, 00:30:51.865 "state": "configuring", 00:30:51.865 "raid_level": "raid0", 00:30:51.865 "superblock": true, 00:30:51.865 "num_base_bdevs": 4, 00:30:51.865 "num_base_bdevs_discovered": 2, 00:30:51.865 "num_base_bdevs_operational": 4, 00:30:51.865 "base_bdevs_list": [ 00:30:51.865 { 00:30:51.865 "name": "BaseBdev1", 00:30:51.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.865 "is_configured": false, 00:30:51.865 "data_offset": 0, 00:30:51.865 "data_size": 0 00:30:51.865 }, 00:30:51.865 { 00:30:51.865 "name": null, 00:30:51.865 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:30:51.865 "is_configured": false, 00:30:51.865 "data_offset": 2048, 00:30:51.865 "data_size": 63488 00:30:51.865 }, 00:30:51.865 { 00:30:51.865 "name": "BaseBdev3", 00:30:51.865 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:30:51.865 "is_configured": true, 00:30:51.865 "data_offset": 2048, 00:30:51.865 "data_size": 63488 00:30:51.865 }, 00:30:51.865 { 00:30:51.865 "name": "BaseBdev4", 00:30:51.865 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:30:51.865 "is_configured": true, 00:30:51.865 "data_offset": 2048, 00:30:51.865 "data_size": 63488 00:30:51.865 } 00:30:51.865 ] 00:30:51.865 }' 00:30:51.865 11:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:51.865 11:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.805 11:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.805 11:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:52.805 11:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:30:52.805 11:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:53.064 [2024-05-15 11:24:11.539638] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:53.064 BaseBdev1 00:30:53.064 11:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:30:53.064 11:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:53.064 11:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:53.064 11:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:53.064 11:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:53.064 11:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:53.064 11:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:30:53.322 11:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:53.581 [ 00:30:53.581 { 00:30:53.581 "name": "BaseBdev1", 00:30:53.581 "aliases": [ 00:30:53.581 "7be0562d-9e7e-4861-808a-4a589f5537a7" 00:30:53.581 ], 00:30:53.581 "product_name": "Malloc disk", 00:30:53.581 "block_size": 512, 00:30:53.581 "num_blocks": 65536, 00:30:53.581 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:30:53.581 "assigned_rate_limits": { 00:30:53.581 "rw_ios_per_sec": 0, 00:30:53.581 "rw_mbytes_per_sec": 0, 00:30:53.581 "r_mbytes_per_sec": 0, 00:30:53.581 "w_mbytes_per_sec": 0 00:30:53.581 }, 00:30:53.581 "claimed": true, 00:30:53.581 "claim_type": "exclusive_write", 00:30:53.581 "zoned": false, 00:30:53.581 "supported_io_types": { 00:30:53.581 "read": true, 00:30:53.581 "write": true, 00:30:53.581 "unmap": true, 00:30:53.581 "write_zeroes": true, 00:30:53.581 "flush": true, 00:30:53.581 "reset": true, 00:30:53.581 "compare": false, 00:30:53.581 "compare_and_write": false, 00:30:53.581 "abort": true, 00:30:53.581 "nvme_admin": false, 00:30:53.581 "nvme_io": false 00:30:53.581 }, 00:30:53.581 "memory_domains": [ 00:30:53.581 { 00:30:53.581 "dma_device_id": "system", 00:30:53.581 "dma_device_type": 1 00:30:53.581 }, 00:30:53.581 { 00:30:53.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:53.581 "dma_device_type": 2 00:30:53.581 } 00:30:53.581 ], 00:30:53.581 "driver_specific": {} 00:30:53.581 } 00:30:53.581 ] 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.581 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:53.840 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:53.840 "name": "Existed_Raid", 00:30:53.840 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:30:53.840 "strip_size_kb": 64, 00:30:53.840 "state": "configuring", 00:30:53.840 "raid_level": "raid0", 00:30:53.840 "superblock": true, 
00:30:53.840 "num_base_bdevs": 4, 00:30:53.840 "num_base_bdevs_discovered": 3, 00:30:53.840 "num_base_bdevs_operational": 4, 00:30:53.840 "base_bdevs_list": [ 00:30:53.840 { 00:30:53.840 "name": "BaseBdev1", 00:30:53.840 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:30:53.840 "is_configured": true, 00:30:53.840 "data_offset": 2048, 00:30:53.840 "data_size": 63488 00:30:53.840 }, 00:30:53.840 { 00:30:53.840 "name": null, 00:30:53.840 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:30:53.840 "is_configured": false, 00:30:53.840 "data_offset": 2048, 00:30:53.840 "data_size": 63488 00:30:53.840 }, 00:30:53.840 { 00:30:53.840 "name": "BaseBdev3", 00:30:53.840 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:30:53.840 "is_configured": true, 00:30:53.840 "data_offset": 2048, 00:30:53.840 "data_size": 63488 00:30:53.840 }, 00:30:53.840 { 00:30:53.840 "name": "BaseBdev4", 00:30:53.840 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:30:53.840 "is_configured": true, 00:30:53.840 "data_offset": 2048, 00:30:53.840 "data_size": 63488 00:30:53.840 } 00:30:53.840 ] 00:30:53.840 }' 00:30:53.840 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:53.840 11:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.407 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.407 11:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:54.665 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:54.665 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:54.924 [2024-05-15 11:24:13.319938] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:30:54.924 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:54.924 "name": "Existed_Raid", 00:30:54.924 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:30:54.924 "strip_size_kb": 64, 00:30:54.924 "state": "configuring", 00:30:54.924 "raid_level": "raid0", 00:30:54.924 "superblock": true, 00:30:54.924 "num_base_bdevs": 4, 00:30:54.924 "num_base_bdevs_discovered": 2, 00:30:54.924 "num_base_bdevs_operational": 4, 00:30:54.924 "base_bdevs_list": [ 00:30:54.924 { 00:30:54.924 "name": "BaseBdev1", 00:30:54.924 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:30:54.924 "is_configured": true, 00:30:54.924 "data_offset": 2048, 00:30:54.924 "data_size": 63488 00:30:54.924 }, 00:30:54.924 { 00:30:54.924 "name": null, 00:30:54.924 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:30:54.924 "is_configured": false, 00:30:54.924 "data_offset": 2048, 00:30:54.924 "data_size": 63488 00:30:54.924 }, 00:30:54.924 { 00:30:54.924 "name": null, 00:30:54.924 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:30:54.924 "is_configured": false, 00:30:54.924 "data_offset": 2048, 00:30:54.924 "data_size": 63488 00:30:54.924 }, 00:30:54.924 { 00:30:54.924 "name": "BaseBdev4", 00:30:54.924 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:30:54.924 "is_configured": true, 00:30:54.924 "data_offset": 2048, 00:30:54.924 "data_size": 63488 00:30:54.924 } 00:30:54.925 ] 00:30:54.925 }' 00:30:54.925 11:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:54.925 11:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.857 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.857 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:56.129 [2024-05-15 11:24:14.708337] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:56.129 11:24:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.129 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:56.400 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:56.400 "name": "Existed_Raid", 00:30:56.400 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:30:56.400 "strip_size_kb": 64, 00:30:56.400 "state": "configuring", 00:30:56.400 "raid_level": "raid0", 00:30:56.400 "superblock": true, 00:30:56.400 "num_base_bdevs": 4, 00:30:56.400 "num_base_bdevs_discovered": 3, 00:30:56.400 "num_base_bdevs_operational": 4, 00:30:56.400 "base_bdevs_list": [ 00:30:56.400 { 00:30:56.400 "name": "BaseBdev1", 00:30:56.400 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:30:56.400 "is_configured": true, 00:30:56.400 "data_offset": 2048, 00:30:56.400 "data_size": 63488 00:30:56.400 }, 00:30:56.400 { 00:30:56.400 "name": null, 00:30:56.400 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:30:56.400 "is_configured": false, 00:30:56.400 "data_offset": 2048, 00:30:56.400 "data_size": 63488 00:30:56.400 }, 00:30:56.400 { 00:30:56.400 "name": "BaseBdev3", 00:30:56.400 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:30:56.400 "is_configured": true, 00:30:56.400 "data_offset": 2048, 00:30:56.400 "data_size": 63488 00:30:56.400 }, 00:30:56.400 { 00:30:56.400 "name": "BaseBdev4", 00:30:56.400 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:30:56.400 "is_configured": true, 00:30:56.400 "data_offset": 2048, 00:30:56.400 "data_size": 63488 00:30:56.400 } 00:30:56.400 ] 00:30:56.400 }' 00:30:56.400 11:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:56.400 11:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.967 11:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.967 11:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:57.226 11:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:30:57.226 11:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:57.485 [2024-05-15 11:24:15.952692] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:57.485 11:24:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.485 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:57.744 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:57.744 "name": "Existed_Raid", 00:30:57.744 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:30:57.744 "strip_size_kb": 64, 00:30:57.744 "state": "configuring", 00:30:57.744 "raid_level": "raid0", 00:30:57.744 "superblock": true, 00:30:57.744 "num_base_bdevs": 4, 00:30:57.744 "num_base_bdevs_discovered": 2, 00:30:57.744 "num_base_bdevs_operational": 4, 00:30:57.744 "base_bdevs_list": [ 00:30:57.744 { 00:30:57.744 "name": null, 00:30:57.744 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:30:57.744 "is_configured": false, 00:30:57.744 "data_offset": 2048, 00:30:57.744 "data_size": 63488 00:30:57.744 }, 00:30:57.744 { 00:30:57.744 "name": null, 00:30:57.744 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:30:57.744 "is_configured": false, 00:30:57.744 "data_offset": 2048, 00:30:57.744 "data_size": 63488 00:30:57.744 }, 00:30:57.744 { 00:30:57.744 "name": "BaseBdev3", 00:30:57.744 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:30:57.744 "is_configured": true, 00:30:57.744 "data_offset": 2048, 00:30:57.744 "data_size": 63488 00:30:57.744 }, 00:30:57.744 { 00:30:57.744 "name": "BaseBdev4", 00:30:57.744 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:30:57.744 "is_configured": true, 00:30:57.744 "data_offset": 2048, 00:30:57.744 "data_size": 63488 00:30:57.744 } 00:30:57.744 ] 00:30:57.744 }' 00:30:57.744 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:57.744 11:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.679 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.679 11:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:58.679 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:30:58.679 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:58.938 [2024-05-15 11:24:17.397751] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 
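(Annotation.) What the trace is exercising here is the remove/re-add cycle while the array stays in the "configuring" state: bdev_raid_remove_base_bdev clears a slot (its name becomes null and is_configured drops to false), bdev_raid_add_base_bdev claims the bdev back into the named slot, and deleting the malloc behind a member with bdev_malloc_delete empties that slot again. A condensed sketch of the cycle; the jq indexes match the base_bdevs_list order in the JSON dumps and the expected values are the ones asserted above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $RPC bdev_raid_remove_base_bdev BaseBdev3
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # false
    $RPC bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # true
    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'   # false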
00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:58.938 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.196 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:59.196 "name": "Existed_Raid", 00:30:59.196 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:30:59.196 "strip_size_kb": 64, 00:30:59.196 "state": "configuring", 00:30:59.196 "raid_level": "raid0", 00:30:59.196 "superblock": true, 00:30:59.196 "num_base_bdevs": 4, 00:30:59.196 "num_base_bdevs_discovered": 3, 00:30:59.196 "num_base_bdevs_operational": 4, 00:30:59.196 "base_bdevs_list": [ 00:30:59.196 { 00:30:59.196 "name": null, 00:30:59.196 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:30:59.196 "is_configured": false, 00:30:59.196 "data_offset": 2048, 00:30:59.196 "data_size": 63488 00:30:59.196 }, 00:30:59.196 { 00:30:59.196 "name": "BaseBdev2", 00:30:59.196 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:30:59.196 "is_configured": true, 00:30:59.196 "data_offset": 2048, 00:30:59.196 "data_size": 63488 00:30:59.196 }, 00:30:59.196 { 00:30:59.196 "name": "BaseBdev3", 00:30:59.196 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:30:59.196 "is_configured": true, 00:30:59.196 "data_offset": 2048, 00:30:59.196 "data_size": 63488 00:30:59.196 }, 00:30:59.196 { 00:30:59.196 "name": "BaseBdev4", 00:30:59.196 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:30:59.196 "is_configured": true, 00:30:59.196 "data_offset": 2048, 00:30:59.196 "data_size": 63488 00:30:59.196 } 00:30:59.196 ] 00:30:59.196 }' 00:30:59.196 11:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:59.196 11:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.762 11:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.762 11:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:00.021 11:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:31:00.021 11:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.021 11:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:00.279 11:24:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7be0562d-9e7e-4861-808a-4a589f5537a7 00:31:00.537 [2024-05-15 11:24:18.997672] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:00.537 NewBaseBdev 00:31:00.537 [2024-05-15 11:24:18.998083] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:31:00.537 [2024-05-15 11:24:18.998121] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:31:00.537 [2024-05-15 11:24:18.998263] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:00.537 [2024-05-15 11:24:18.998629] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:31:00.537 [2024-05-15 11:24:18.998670] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:31:00.537 [2024-05-15 11:24:18.998876] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:00.537 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:31:00.537 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:31:00.537 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:00.537 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:31:00.537 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:00.537 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:00.537 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:00.796 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:00.796 [ 00:31:00.796 { 00:31:00.796 "name": "NewBaseBdev", 00:31:00.796 "aliases": [ 00:31:00.796 "7be0562d-9e7e-4861-808a-4a589f5537a7" 00:31:00.796 ], 00:31:00.796 "product_name": "Malloc disk", 00:31:00.796 "block_size": 512, 00:31:00.796 "num_blocks": 65536, 00:31:00.796 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:31:00.796 "assigned_rate_limits": { 00:31:00.796 "rw_ios_per_sec": 0, 00:31:00.796 "rw_mbytes_per_sec": 0, 00:31:00.796 "r_mbytes_per_sec": 0, 00:31:00.796 "w_mbytes_per_sec": 0 00:31:00.796 }, 00:31:00.796 "claimed": true, 00:31:00.796 "claim_type": "exclusive_write", 00:31:00.796 "zoned": false, 00:31:00.796 "supported_io_types": { 00:31:00.796 "read": true, 00:31:00.796 "write": true, 00:31:00.796 "unmap": true, 00:31:00.796 "write_zeroes": true, 00:31:00.796 "flush": true, 00:31:00.796 "reset": true, 00:31:00.796 "compare": false, 00:31:00.796 "compare_and_write": false, 00:31:00.796 "abort": true, 00:31:00.796 "nvme_admin": false, 00:31:00.796 "nvme_io": false 00:31:00.796 }, 00:31:00.796 "memory_domains": [ 00:31:00.796 { 00:31:00.796 "dma_device_id": "system", 00:31:00.796 "dma_device_type": 1 00:31:00.796 }, 00:31:00.796 { 00:31:00.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:00.796 "dma_device_type": 2 00:31:00.796 } 00:31:00.796 ], 00:31:00.796 
"driver_specific": {} 00:31:00.796 } 00:31:00.796 ] 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:01.055 "name": "Existed_Raid", 00:31:01.055 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:31:01.055 "strip_size_kb": 64, 00:31:01.055 "state": "online", 00:31:01.055 "raid_level": "raid0", 00:31:01.055 "superblock": true, 00:31:01.055 "num_base_bdevs": 4, 00:31:01.055 "num_base_bdevs_discovered": 4, 00:31:01.055 "num_base_bdevs_operational": 4, 00:31:01.055 "base_bdevs_list": [ 00:31:01.055 { 00:31:01.055 "name": "NewBaseBdev", 00:31:01.055 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:31:01.055 "is_configured": true, 00:31:01.055 "data_offset": 2048, 00:31:01.055 "data_size": 63488 00:31:01.055 }, 00:31:01.055 { 00:31:01.055 "name": "BaseBdev2", 00:31:01.055 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:31:01.055 "is_configured": true, 00:31:01.055 "data_offset": 2048, 00:31:01.055 "data_size": 63488 00:31:01.055 }, 00:31:01.055 { 00:31:01.055 "name": "BaseBdev3", 00:31:01.055 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:31:01.055 "is_configured": true, 00:31:01.055 "data_offset": 2048, 00:31:01.055 "data_size": 63488 00:31:01.055 }, 00:31:01.055 { 00:31:01.055 "name": "BaseBdev4", 00:31:01.055 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:31:01.055 "is_configured": true, 00:31:01.055 "data_offset": 2048, 00:31:01.055 "data_size": 63488 00:31:01.055 } 00:31:01.055 ] 00:31:01.055 }' 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:01.055 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:01.991 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:31:01.991 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_name=Existed_Raid 00:31:01.991 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:31:01.991 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:31:01.991 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:31:01.991 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:31:01.991 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:01.991 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:31:01.991 [2024-05-15 11:24:20.586160] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:01.991 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:31:01.991 "name": "Existed_Raid", 00:31:01.991 "aliases": [ 00:31:01.991 "c162f076-ca8d-4134-bb45-1b79ffddfb65" 00:31:01.991 ], 00:31:01.991 "product_name": "Raid Volume", 00:31:01.991 "block_size": 512, 00:31:01.991 "num_blocks": 253952, 00:31:01.991 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:31:01.991 "assigned_rate_limits": { 00:31:01.991 "rw_ios_per_sec": 0, 00:31:01.991 "rw_mbytes_per_sec": 0, 00:31:01.991 "r_mbytes_per_sec": 0, 00:31:01.991 "w_mbytes_per_sec": 0 00:31:01.991 }, 00:31:01.991 "claimed": false, 00:31:01.991 "zoned": false, 00:31:01.991 "supported_io_types": { 00:31:01.991 "read": true, 00:31:01.991 "write": true, 00:31:01.991 "unmap": true, 00:31:01.991 "write_zeroes": true, 00:31:01.991 "flush": true, 00:31:01.991 "reset": true, 00:31:01.991 "compare": false, 00:31:01.991 "compare_and_write": false, 00:31:01.991 "abort": false, 00:31:01.991 "nvme_admin": false, 00:31:01.991 "nvme_io": false 00:31:01.991 }, 00:31:01.991 "memory_domains": [ 00:31:01.991 { 00:31:01.991 "dma_device_id": "system", 00:31:01.991 "dma_device_type": 1 00:31:01.991 }, 00:31:01.991 { 00:31:01.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.991 "dma_device_type": 2 00:31:01.991 }, 00:31:01.991 { 00:31:01.991 "dma_device_id": "system", 00:31:01.991 "dma_device_type": 1 00:31:01.991 }, 00:31:01.992 { 00:31:01.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.992 "dma_device_type": 2 00:31:01.992 }, 00:31:01.992 { 00:31:01.992 "dma_device_id": "system", 00:31:01.992 "dma_device_type": 1 00:31:01.992 }, 00:31:01.992 { 00:31:01.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.992 "dma_device_type": 2 00:31:01.992 }, 00:31:01.992 { 00:31:01.992 "dma_device_id": "system", 00:31:01.992 "dma_device_type": 1 00:31:01.992 }, 00:31:01.992 { 00:31:01.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.992 "dma_device_type": 2 00:31:01.992 } 00:31:01.992 ], 00:31:01.992 "driver_specific": { 00:31:01.992 "raid": { 00:31:01.992 "uuid": "c162f076-ca8d-4134-bb45-1b79ffddfb65", 00:31:01.992 "strip_size_kb": 64, 00:31:01.992 "state": "online", 00:31:01.992 "raid_level": "raid0", 00:31:01.992 "superblock": true, 00:31:01.992 "num_base_bdevs": 4, 00:31:01.992 "num_base_bdevs_discovered": 4, 00:31:01.992 "num_base_bdevs_operational": 4, 00:31:01.992 "base_bdevs_list": [ 00:31:01.992 { 00:31:01.992 "name": "NewBaseBdev", 00:31:01.992 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:31:01.992 "is_configured": true, 00:31:01.992 "data_offset": 2048, 00:31:01.992 "data_size": 63488 00:31:01.992 }, 00:31:01.992 { 00:31:01.992 
"name": "BaseBdev2", 00:31:01.992 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:31:01.992 "is_configured": true, 00:31:01.992 "data_offset": 2048, 00:31:01.992 "data_size": 63488 00:31:01.992 }, 00:31:01.992 { 00:31:01.992 "name": "BaseBdev3", 00:31:01.992 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:31:01.992 "is_configured": true, 00:31:01.992 "data_offset": 2048, 00:31:01.992 "data_size": 63488 00:31:01.992 }, 00:31:01.992 { 00:31:01.992 "name": "BaseBdev4", 00:31:01.992 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:31:01.992 "is_configured": true, 00:31:01.992 "data_offset": 2048, 00:31:01.992 "data_size": 63488 00:31:01.992 } 00:31:01.992 ] 00:31:01.992 } 00:31:01.992 } 00:31:01.992 }' 00:31:01.992 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:02.250 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:31:02.250 BaseBdev2 00:31:02.250 BaseBdev3 00:31:02.250 BaseBdev4' 00:31:02.250 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:02.250 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:31:02.250 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:02.523 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:02.523 "name": "NewBaseBdev", 00:31:02.523 "aliases": [ 00:31:02.523 "7be0562d-9e7e-4861-808a-4a589f5537a7" 00:31:02.523 ], 00:31:02.523 "product_name": "Malloc disk", 00:31:02.523 "block_size": 512, 00:31:02.523 "num_blocks": 65536, 00:31:02.523 "uuid": "7be0562d-9e7e-4861-808a-4a589f5537a7", 00:31:02.523 "assigned_rate_limits": { 00:31:02.523 "rw_ios_per_sec": 0, 00:31:02.523 "rw_mbytes_per_sec": 0, 00:31:02.523 "r_mbytes_per_sec": 0, 00:31:02.523 "w_mbytes_per_sec": 0 00:31:02.523 }, 00:31:02.523 "claimed": true, 00:31:02.523 "claim_type": "exclusive_write", 00:31:02.523 "zoned": false, 00:31:02.523 "supported_io_types": { 00:31:02.523 "read": true, 00:31:02.523 "write": true, 00:31:02.523 "unmap": true, 00:31:02.523 "write_zeroes": true, 00:31:02.523 "flush": true, 00:31:02.523 "reset": true, 00:31:02.523 "compare": false, 00:31:02.523 "compare_and_write": false, 00:31:02.523 "abort": true, 00:31:02.523 "nvme_admin": false, 00:31:02.523 "nvme_io": false 00:31:02.523 }, 00:31:02.523 "memory_domains": [ 00:31:02.523 { 00:31:02.523 "dma_device_id": "system", 00:31:02.523 "dma_device_type": 1 00:31:02.523 }, 00:31:02.523 { 00:31:02.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:02.523 "dma_device_type": 2 00:31:02.523 } 00:31:02.523 ], 00:31:02.523 "driver_specific": {} 00:31:02.523 }' 00:31:02.523 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:02.523 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:02.523 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:02.523 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:02.523 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:02.523 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:31:02.523 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:02.782 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:02.782 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:02.782 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:02.782 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:02.782 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:02.782 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:02.782 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:02.782 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:03.041 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:03.041 "name": "BaseBdev2", 00:31:03.041 "aliases": [ 00:31:03.041 "604c5683-dec1-45b3-ac00-20592dcb625a" 00:31:03.041 ], 00:31:03.041 "product_name": "Malloc disk", 00:31:03.041 "block_size": 512, 00:31:03.041 "num_blocks": 65536, 00:31:03.041 "uuid": "604c5683-dec1-45b3-ac00-20592dcb625a", 00:31:03.041 "assigned_rate_limits": { 00:31:03.041 "rw_ios_per_sec": 0, 00:31:03.041 "rw_mbytes_per_sec": 0, 00:31:03.041 "r_mbytes_per_sec": 0, 00:31:03.041 "w_mbytes_per_sec": 0 00:31:03.041 }, 00:31:03.041 "claimed": true, 00:31:03.041 "claim_type": "exclusive_write", 00:31:03.041 "zoned": false, 00:31:03.041 "supported_io_types": { 00:31:03.041 "read": true, 00:31:03.041 "write": true, 00:31:03.041 "unmap": true, 00:31:03.041 "write_zeroes": true, 00:31:03.041 "flush": true, 00:31:03.041 "reset": true, 00:31:03.041 "compare": false, 00:31:03.041 "compare_and_write": false, 00:31:03.041 "abort": true, 00:31:03.041 "nvme_admin": false, 00:31:03.041 "nvme_io": false 00:31:03.041 }, 00:31:03.041 "memory_domains": [ 00:31:03.041 { 00:31:03.041 "dma_device_id": "system", 00:31:03.041 "dma_device_type": 1 00:31:03.041 }, 00:31:03.041 { 00:31:03.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:03.041 "dma_device_type": 2 00:31:03.041 } 00:31:03.041 ], 00:31:03.041 "driver_specific": {} 00:31:03.041 }' 00:31:03.041 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:03.041 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:03.300 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:03.300 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:03.300 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:03.300 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:03.300 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:03.300 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:03.300 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:03.300 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:03.559 11:24:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:03.559 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:03.559 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:03.559 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:03.559 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:03.818 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:03.818 "name": "BaseBdev3", 00:31:03.818 "aliases": [ 00:31:03.818 "16687581-704a-4412-be22-9b90faf7163f" 00:31:03.818 ], 00:31:03.818 "product_name": "Malloc disk", 00:31:03.818 "block_size": 512, 00:31:03.818 "num_blocks": 65536, 00:31:03.818 "uuid": "16687581-704a-4412-be22-9b90faf7163f", 00:31:03.818 "assigned_rate_limits": { 00:31:03.818 "rw_ios_per_sec": 0, 00:31:03.818 "rw_mbytes_per_sec": 0, 00:31:03.818 "r_mbytes_per_sec": 0, 00:31:03.818 "w_mbytes_per_sec": 0 00:31:03.818 }, 00:31:03.818 "claimed": true, 00:31:03.818 "claim_type": "exclusive_write", 00:31:03.818 "zoned": false, 00:31:03.818 "supported_io_types": { 00:31:03.818 "read": true, 00:31:03.818 "write": true, 00:31:03.818 "unmap": true, 00:31:03.818 "write_zeroes": true, 00:31:03.818 "flush": true, 00:31:03.818 "reset": true, 00:31:03.818 "compare": false, 00:31:03.818 "compare_and_write": false, 00:31:03.818 "abort": true, 00:31:03.818 "nvme_admin": false, 00:31:03.818 "nvme_io": false 00:31:03.818 }, 00:31:03.818 "memory_domains": [ 00:31:03.818 { 00:31:03.818 "dma_device_id": "system", 00:31:03.818 "dma_device_type": 1 00:31:03.818 }, 00:31:03.818 { 00:31:03.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:03.818 "dma_device_type": 2 00:31:03.818 } 00:31:03.818 ], 00:31:03.818 "driver_specific": {} 00:31:03.818 }' 00:31:03.818 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:03.818 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:03.818 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:03.818 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:04.077 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:04.077 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:04.077 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:04.077 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:04.077 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:04.077 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:04.077 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:04.336 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:04.336 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:04.336 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:31:04.336 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:04.336 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:04.336 "name": "BaseBdev4", 00:31:04.336 "aliases": [ 00:31:04.336 "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110" 00:31:04.336 ], 00:31:04.336 "product_name": "Malloc disk", 00:31:04.336 "block_size": 512, 00:31:04.336 "num_blocks": 65536, 00:31:04.336 "uuid": "8e8fdae5-b91f-4d58-a037-3a1fb7f8c110", 00:31:04.336 "assigned_rate_limits": { 00:31:04.336 "rw_ios_per_sec": 0, 00:31:04.336 "rw_mbytes_per_sec": 0, 00:31:04.336 "r_mbytes_per_sec": 0, 00:31:04.336 "w_mbytes_per_sec": 0 00:31:04.336 }, 00:31:04.336 "claimed": true, 00:31:04.336 "claim_type": "exclusive_write", 00:31:04.336 "zoned": false, 00:31:04.336 "supported_io_types": { 00:31:04.336 "read": true, 00:31:04.336 "write": true, 00:31:04.336 "unmap": true, 00:31:04.336 "write_zeroes": true, 00:31:04.336 "flush": true, 00:31:04.336 "reset": true, 00:31:04.336 "compare": false, 00:31:04.336 "compare_and_write": false, 00:31:04.336 "abort": true, 00:31:04.336 "nvme_admin": false, 00:31:04.336 "nvme_io": false 00:31:04.336 }, 00:31:04.336 "memory_domains": [ 00:31:04.336 { 00:31:04.336 "dma_device_id": "system", 00:31:04.336 "dma_device_type": 1 00:31:04.336 }, 00:31:04.336 { 00:31:04.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:04.336 "dma_device_type": 2 00:31:04.336 } 00:31:04.336 ], 00:31:04.336 "driver_specific": {} 00:31:04.336 }' 00:31:04.336 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:04.594 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:04.594 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:04.594 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:04.594 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:04.594 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:04.594 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:04.853 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:04.853 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:04.853 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:04.853 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:04.853 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:04.853 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:05.112 [2024-05-15 11:24:23.682560] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:05.112 [2024-05-15 11:24:23.682597] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:05.112 [2024-05-15 11:24:23.682664] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:05.112 [2024-05-15 11:24:23.682708] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:31:05.112 [2024-05-15 11:24:23.682719] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 65601 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 65601 ']' 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 65601 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65601 00:31:05.112 killing process with pid 65601 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65601' 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 65601 00:31:05.112 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 65601 00:31:05.112 [2024-05-15 11:24:23.727513] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:05.679 [2024-05-15 11:24:24.035453] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:07.065 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:31:07.065 00:31:07.065 real 0m34.621s 00:31:07.065 user 1m5.101s 00:31:07.065 sys 0m3.547s 00:31:07.065 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:07.065 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.065 ************************************ 00:31:07.065 END TEST raid_state_function_test_sb 00:31:07.065 ************************************ 00:31:07.065 11:24:25 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:31:07.065 11:24:25 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:31:07.065 11:24:25 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:07.065 11:24:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:07.065 ************************************ 00:31:07.065 START TEST raid_superblock_test 00:31:07.065 ************************************ 00:31:07.065 11:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:31:07.066 11:24:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66715 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66715 /var/tmp/spdk-raid.sock 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 66715 ']' 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:07.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:07.066 11:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.066 [2024-05-15 11:24:25.474423] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
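(Annotation.) raid_superblock_test runs against its own SPDK application: bdev_svc is started with the dedicated RPC socket and bdev_raid debug logging, waitforlisten blocks until that socket answers, and the array is then assembled from passthru bdevs stacked on malloc bdevs so that every member carries a fixed UUID, finishing with bdev_raid_create and the -s (superblock) flag. A condensed sketch of the sequence driven in the trace that follows; the socket polling merely stands in for the waitforlisten helper, and spdk_get_version is just one convenient RPC to poll with:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Start the bdev-only application with bdev_raid debug logging enabled.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!

    # Stand-in for waitforlisten: poll until the RPC socket responds.
    until $RPC spdk_get_version >/dev/null 2>&1; do sleep 0.1; done

    # Build four members: malloc -> passthru with a deterministic UUID.
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b "malloc$i"
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
             -u "00000000-0000-0000-0000-00000000000$i"
    done

    # Assemble the raid0 set with a 64 KiB strip and an on-disk superblock (-s).
    $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s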
00:31:07.066 [2024-05-15 11:24:25.474626] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66715 ] 00:31:07.066 [2024-05-15 11:24:25.628644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.324 [2024-05-15 11:24:25.841912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.584 [2024-05-15 11:24:26.041775] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:07.842 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:07.843 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:31:08.101 malloc1 00:31:08.102 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:08.102 [2024-05-15 11:24:26.730906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:08.102 [2024-05-15 11:24:26.731003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:08.102 [2024-05-15 11:24:26.731057] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:31:08.102 [2024-05-15 11:24:26.731097] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:08.102 [2024-05-15 11:24:26.733028] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:08.102 [2024-05-15 11:24:26.733065] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:08.361 pt1 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:31:08.361 malloc2 00:31:08.361 11:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:08.619 [2024-05-15 11:24:27.126774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:08.619 [2024-05-15 11:24:27.127172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:08.619 [2024-05-15 11:24:27.127237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:31:08.619 [2024-05-15 11:24:27.127279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:08.619 pt2 00:31:08.619 [2024-05-15 11:24:27.129144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:08.619 [2024-05-15 11:24:27.129186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:08.619 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:08.619 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:08.619 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:31:08.619 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:31:08.619 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:08.619 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:08.619 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:08.619 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:08.619 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:31:08.878 malloc3 00:31:08.878 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:09.137 [2024-05-15 11:24:27.577176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:09.137 [2024-05-15 11:24:27.577291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:09.137 [2024-05-15 11:24:27.577341] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:31:09.137 [2024-05-15 11:24:27.577388] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:09.137 [2024-05-15 11:24:27.579074] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:09.137 [2024-05-15 11:24:27.579127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:09.137 pt3 00:31:09.137 11:24:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:09.137 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:09.137 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:31:09.137 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:31:09.137 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:31:09.137 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:09.137 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:09.137 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:09.137 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:31:09.410 malloc4 00:31:09.410 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:31:09.410 [2024-05-15 11:24:27.978373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:31:09.410 [2024-05-15 11:24:27.978485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:09.410 [2024-05-15 11:24:27.978534] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:31:09.410 [2024-05-15 11:24:27.978584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:09.410 [2024-05-15 11:24:27.980494] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:09.410 [2024-05-15 11:24:27.980538] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:31:09.410 pt4 00:31:09.410 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:09.410 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:09.410 11:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:31:09.669 [2024-05-15 11:24:28.190456] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:09.669 [2024-05-15 11:24:28.192111] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:09.669 [2024-05-15 11:24:28.192160] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:09.669 [2024-05-15 11:24:28.192215] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:31:09.669 [2024-05-15 11:24:28.192340] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:31:09.669 [2024-05-15 11:24:28.192368] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:31:09.669 [2024-05-15 11:24:28.192484] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:31:09.669 [2024-05-15 11:24:28.192718] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:31:09.669 [2024-05-15 11:24:28.192731] bdev_raid.c:1726:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:31:09.669 [2024-05-15 11:24:28.192853] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.669 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.946 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:09.947 "name": "raid_bdev1", 00:31:09.947 "uuid": "2762e34e-6abf-4299-a065-e43b11f85444", 00:31:09.947 "strip_size_kb": 64, 00:31:09.947 "state": "online", 00:31:09.947 "raid_level": "raid0", 00:31:09.947 "superblock": true, 00:31:09.947 "num_base_bdevs": 4, 00:31:09.947 "num_base_bdevs_discovered": 4, 00:31:09.947 "num_base_bdevs_operational": 4, 00:31:09.947 "base_bdevs_list": [ 00:31:09.947 { 00:31:09.947 "name": "pt1", 00:31:09.947 "uuid": "b3296780-9788-5f56-abcd-e44c86f6496b", 00:31:09.947 "is_configured": true, 00:31:09.947 "data_offset": 2048, 00:31:09.947 "data_size": 63488 00:31:09.947 }, 00:31:09.947 { 00:31:09.947 "name": "pt2", 00:31:09.947 "uuid": "3d677604-1317-5544-ad1d-1445bd5f7236", 00:31:09.947 "is_configured": true, 00:31:09.947 "data_offset": 2048, 00:31:09.947 "data_size": 63488 00:31:09.947 }, 00:31:09.947 { 00:31:09.947 "name": "pt3", 00:31:09.947 "uuid": "8870364f-597e-5b59-8988-85cd53f947d3", 00:31:09.947 "is_configured": true, 00:31:09.947 "data_offset": 2048, 00:31:09.947 "data_size": 63488 00:31:09.947 }, 00:31:09.947 { 00:31:09.947 "name": "pt4", 00:31:09.947 "uuid": "c0365156-cba4-54f8-ad0b-f5418e3b881b", 00:31:09.947 "is_configured": true, 00:31:09.947 "data_offset": 2048, 00:31:09.947 "data_size": 63488 00:31:09.947 } 00:31:09.947 ] 00:31:09.947 }' 00:31:09.947 11:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:09.947 11:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.513 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:31:10.513 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:31:10.513 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:31:10.513 
11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:31:10.513 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:31:10.513 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:31:10.513 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:31:10.513 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:10.771 [2024-05-15 11:24:29.298806] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:10.771 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:31:10.771 "name": "raid_bdev1", 00:31:10.771 "aliases": [ 00:31:10.771 "2762e34e-6abf-4299-a065-e43b11f85444" 00:31:10.771 ], 00:31:10.771 "product_name": "Raid Volume", 00:31:10.771 "block_size": 512, 00:31:10.771 "num_blocks": 253952, 00:31:10.772 "uuid": "2762e34e-6abf-4299-a065-e43b11f85444", 00:31:10.772 "assigned_rate_limits": { 00:31:10.772 "rw_ios_per_sec": 0, 00:31:10.772 "rw_mbytes_per_sec": 0, 00:31:10.772 "r_mbytes_per_sec": 0, 00:31:10.772 "w_mbytes_per_sec": 0 00:31:10.772 }, 00:31:10.772 "claimed": false, 00:31:10.772 "zoned": false, 00:31:10.772 "supported_io_types": { 00:31:10.772 "read": true, 00:31:10.772 "write": true, 00:31:10.772 "unmap": true, 00:31:10.772 "write_zeroes": true, 00:31:10.772 "flush": true, 00:31:10.772 "reset": true, 00:31:10.772 "compare": false, 00:31:10.772 "compare_and_write": false, 00:31:10.772 "abort": false, 00:31:10.772 "nvme_admin": false, 00:31:10.772 "nvme_io": false 00:31:10.772 }, 00:31:10.772 "memory_domains": [ 00:31:10.772 { 00:31:10.772 "dma_device_id": "system", 00:31:10.772 "dma_device_type": 1 00:31:10.772 }, 00:31:10.772 { 00:31:10.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.772 "dma_device_type": 2 00:31:10.772 }, 00:31:10.772 { 00:31:10.772 "dma_device_id": "system", 00:31:10.772 "dma_device_type": 1 00:31:10.772 }, 00:31:10.772 { 00:31:10.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.772 "dma_device_type": 2 00:31:10.772 }, 00:31:10.772 { 00:31:10.772 "dma_device_id": "system", 00:31:10.772 "dma_device_type": 1 00:31:10.772 }, 00:31:10.772 { 00:31:10.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.772 "dma_device_type": 2 00:31:10.772 }, 00:31:10.772 { 00:31:10.772 "dma_device_id": "system", 00:31:10.772 "dma_device_type": 1 00:31:10.772 }, 00:31:10.772 { 00:31:10.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.772 "dma_device_type": 2 00:31:10.772 } 00:31:10.772 ], 00:31:10.772 "driver_specific": { 00:31:10.772 "raid": { 00:31:10.772 "uuid": "2762e34e-6abf-4299-a065-e43b11f85444", 00:31:10.772 "strip_size_kb": 64, 00:31:10.772 "state": "online", 00:31:10.772 "raid_level": "raid0", 00:31:10.772 "superblock": true, 00:31:10.772 "num_base_bdevs": 4, 00:31:10.772 "num_base_bdevs_discovered": 4, 00:31:10.772 "num_base_bdevs_operational": 4, 00:31:10.772 "base_bdevs_list": [ 00:31:10.772 { 00:31:10.772 "name": "pt1", 00:31:10.772 "uuid": "b3296780-9788-5f56-abcd-e44c86f6496b", 00:31:10.772 "is_configured": true, 00:31:10.772 "data_offset": 2048, 00:31:10.772 "data_size": 63488 00:31:10.772 }, 00:31:10.772 { 00:31:10.772 "name": "pt2", 00:31:10.772 "uuid": "3d677604-1317-5544-ad1d-1445bd5f7236", 00:31:10.772 "is_configured": true, 00:31:10.772 "data_offset": 2048, 00:31:10.772 "data_size": 63488 00:31:10.772 }, 00:31:10.772 
{ 00:31:10.772 "name": "pt3", 00:31:10.772 "uuid": "8870364f-597e-5b59-8988-85cd53f947d3", 00:31:10.772 "is_configured": true, 00:31:10.772 "data_offset": 2048, 00:31:10.772 "data_size": 63488 00:31:10.772 }, 00:31:10.772 { 00:31:10.772 "name": "pt4", 00:31:10.772 "uuid": "c0365156-cba4-54f8-ad0b-f5418e3b881b", 00:31:10.772 "is_configured": true, 00:31:10.772 "data_offset": 2048, 00:31:10.772 "data_size": 63488 00:31:10.772 } 00:31:10.772 ] 00:31:10.772 } 00:31:10.772 } 00:31:10.772 }' 00:31:10.772 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:10.772 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:31:10.772 pt2 00:31:10.772 pt3 00:31:10.772 pt4' 00:31:10.772 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:10.772 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:10.772 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:11.030 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:11.030 "name": "pt1", 00:31:11.030 "aliases": [ 00:31:11.030 "b3296780-9788-5f56-abcd-e44c86f6496b" 00:31:11.030 ], 00:31:11.030 "product_name": "passthru", 00:31:11.030 "block_size": 512, 00:31:11.030 "num_blocks": 65536, 00:31:11.030 "uuid": "b3296780-9788-5f56-abcd-e44c86f6496b", 00:31:11.030 "assigned_rate_limits": { 00:31:11.030 "rw_ios_per_sec": 0, 00:31:11.030 "rw_mbytes_per_sec": 0, 00:31:11.030 "r_mbytes_per_sec": 0, 00:31:11.030 "w_mbytes_per_sec": 0 00:31:11.030 }, 00:31:11.030 "claimed": true, 00:31:11.030 "claim_type": "exclusive_write", 00:31:11.030 "zoned": false, 00:31:11.030 "supported_io_types": { 00:31:11.030 "read": true, 00:31:11.031 "write": true, 00:31:11.031 "unmap": true, 00:31:11.031 "write_zeroes": true, 00:31:11.031 "flush": true, 00:31:11.031 "reset": true, 00:31:11.031 "compare": false, 00:31:11.031 "compare_and_write": false, 00:31:11.031 "abort": true, 00:31:11.031 "nvme_admin": false, 00:31:11.031 "nvme_io": false 00:31:11.031 }, 00:31:11.031 "memory_domains": [ 00:31:11.031 { 00:31:11.031 "dma_device_id": "system", 00:31:11.031 "dma_device_type": 1 00:31:11.031 }, 00:31:11.031 { 00:31:11.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.031 "dma_device_type": 2 00:31:11.031 } 00:31:11.031 ], 00:31:11.031 "driver_specific": { 00:31:11.031 "passthru": { 00:31:11.031 "name": "pt1", 00:31:11.031 "base_bdev_name": "malloc1" 00:31:11.031 } 00:31:11.031 } 00:31:11.031 }' 00:31:11.031 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:11.031 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:11.288 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:11.288 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:11.288 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:11.288 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:11.288 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:11.288 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:11.546 11:24:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:11.546 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:11.546 11:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:11.546 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:11.546 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:11.546 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:11.546 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:11.804 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:11.804 "name": "pt2", 00:31:11.804 "aliases": [ 00:31:11.804 "3d677604-1317-5544-ad1d-1445bd5f7236" 00:31:11.804 ], 00:31:11.804 "product_name": "passthru", 00:31:11.804 "block_size": 512, 00:31:11.804 "num_blocks": 65536, 00:31:11.804 "uuid": "3d677604-1317-5544-ad1d-1445bd5f7236", 00:31:11.804 "assigned_rate_limits": { 00:31:11.804 "rw_ios_per_sec": 0, 00:31:11.804 "rw_mbytes_per_sec": 0, 00:31:11.804 "r_mbytes_per_sec": 0, 00:31:11.804 "w_mbytes_per_sec": 0 00:31:11.804 }, 00:31:11.804 "claimed": true, 00:31:11.804 "claim_type": "exclusive_write", 00:31:11.804 "zoned": false, 00:31:11.804 "supported_io_types": { 00:31:11.804 "read": true, 00:31:11.804 "write": true, 00:31:11.804 "unmap": true, 00:31:11.804 "write_zeroes": true, 00:31:11.804 "flush": true, 00:31:11.804 "reset": true, 00:31:11.804 "compare": false, 00:31:11.804 "compare_and_write": false, 00:31:11.804 "abort": true, 00:31:11.804 "nvme_admin": false, 00:31:11.804 "nvme_io": false 00:31:11.804 }, 00:31:11.804 "memory_domains": [ 00:31:11.804 { 00:31:11.804 "dma_device_id": "system", 00:31:11.804 "dma_device_type": 1 00:31:11.804 }, 00:31:11.804 { 00:31:11.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.804 "dma_device_type": 2 00:31:11.804 } 00:31:11.804 ], 00:31:11.804 "driver_specific": { 00:31:11.804 "passthru": { 00:31:11.804 "name": "pt2", 00:31:11.804 "base_bdev_name": "malloc2" 00:31:11.804 } 00:31:11.804 } 00:31:11.804 }' 00:31:11.804 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:11.804 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:11.804 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:11.804 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:11.804 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- 
# for name in $base_bdev_names 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:31:12.063 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:12.321 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:12.321 "name": "pt3", 00:31:12.321 "aliases": [ 00:31:12.321 "8870364f-597e-5b59-8988-85cd53f947d3" 00:31:12.321 ], 00:31:12.321 "product_name": "passthru", 00:31:12.321 "block_size": 512, 00:31:12.321 "num_blocks": 65536, 00:31:12.321 "uuid": "8870364f-597e-5b59-8988-85cd53f947d3", 00:31:12.321 "assigned_rate_limits": { 00:31:12.321 "rw_ios_per_sec": 0, 00:31:12.321 "rw_mbytes_per_sec": 0, 00:31:12.321 "r_mbytes_per_sec": 0, 00:31:12.321 "w_mbytes_per_sec": 0 00:31:12.321 }, 00:31:12.321 "claimed": true, 00:31:12.321 "claim_type": "exclusive_write", 00:31:12.321 "zoned": false, 00:31:12.321 "supported_io_types": { 00:31:12.321 "read": true, 00:31:12.321 "write": true, 00:31:12.321 "unmap": true, 00:31:12.321 "write_zeroes": true, 00:31:12.321 "flush": true, 00:31:12.321 "reset": true, 00:31:12.321 "compare": false, 00:31:12.321 "compare_and_write": false, 00:31:12.321 "abort": true, 00:31:12.321 "nvme_admin": false, 00:31:12.321 "nvme_io": false 00:31:12.321 }, 00:31:12.321 "memory_domains": [ 00:31:12.321 { 00:31:12.321 "dma_device_id": "system", 00:31:12.321 "dma_device_type": 1 00:31:12.321 }, 00:31:12.321 { 00:31:12.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:12.321 "dma_device_type": 2 00:31:12.321 } 00:31:12.321 ], 00:31:12.321 "driver_specific": { 00:31:12.321 "passthru": { 00:31:12.321 "name": "pt3", 00:31:12.321 "base_bdev_name": "malloc3" 00:31:12.321 } 00:31:12.321 } 00:31:12.321 }' 00:31:12.321 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:12.579 11:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:12.579 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:12.579 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:12.579 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:12.579 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:12.579 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:12.837 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:12.837 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:12.837 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:12.837 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:12.837 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:12.837 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:12.837 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:31:12.837 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:13.145 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:13.145 "name": "pt4", 00:31:13.145 "aliases": [ 
00:31:13.145 "c0365156-cba4-54f8-ad0b-f5418e3b881b" 00:31:13.145 ], 00:31:13.145 "product_name": "passthru", 00:31:13.145 "block_size": 512, 00:31:13.145 "num_blocks": 65536, 00:31:13.145 "uuid": "c0365156-cba4-54f8-ad0b-f5418e3b881b", 00:31:13.145 "assigned_rate_limits": { 00:31:13.145 "rw_ios_per_sec": 0, 00:31:13.145 "rw_mbytes_per_sec": 0, 00:31:13.145 "r_mbytes_per_sec": 0, 00:31:13.145 "w_mbytes_per_sec": 0 00:31:13.145 }, 00:31:13.145 "claimed": true, 00:31:13.145 "claim_type": "exclusive_write", 00:31:13.145 "zoned": false, 00:31:13.145 "supported_io_types": { 00:31:13.145 "read": true, 00:31:13.145 "write": true, 00:31:13.145 "unmap": true, 00:31:13.145 "write_zeroes": true, 00:31:13.145 "flush": true, 00:31:13.145 "reset": true, 00:31:13.145 "compare": false, 00:31:13.145 "compare_and_write": false, 00:31:13.145 "abort": true, 00:31:13.145 "nvme_admin": false, 00:31:13.145 "nvme_io": false 00:31:13.145 }, 00:31:13.145 "memory_domains": [ 00:31:13.145 { 00:31:13.145 "dma_device_id": "system", 00:31:13.145 "dma_device_type": 1 00:31:13.145 }, 00:31:13.146 { 00:31:13.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:13.146 "dma_device_type": 2 00:31:13.146 } 00:31:13.146 ], 00:31:13.146 "driver_specific": { 00:31:13.146 "passthru": { 00:31:13.146 "name": "pt4", 00:31:13.146 "base_bdev_name": "malloc4" 00:31:13.146 } 00:31:13.146 } 00:31:13.146 }' 00:31:13.146 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:13.146 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:13.146 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:13.146 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:13.403 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:13.403 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:13.403 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:13.403 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:13.403 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:13.403 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:13.403 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:13.663 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:13.663 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:13.663 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:13.921 [2024-05-15 11:24:32.347232] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:13.921 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2762e34e-6abf-4299-a065-e43b11f85444 00:31:13.921 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2762e34e-6abf-4299-a065-e43b11f85444 ']' 00:31:13.921 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:13.921 [2024-05-15 11:24:32.551102] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:13.921 
[2024-05-15 11:24:32.551143] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:13.921 [2024-05-15 11:24:32.551217] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:13.921 [2024-05-15 11:24:32.551265] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:13.921 [2024-05-15 11:24:32.551283] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:31:14.181 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:14.181 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.181 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:14.181 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:14.181 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:14.181 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:14.440 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:14.440 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:14.698 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:14.698 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:14.956 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:14.956 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:31:15.214 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:15.214 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # 
type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:15.473 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:31:15.732 [2024-05-15 11:24:34.139340] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:15.732 [2024-05-15 11:24:34.141057] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:15.732 [2024-05-15 11:24:34.141105] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:15.732 [2024-05-15 11:24:34.141133] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:31:15.732 [2024-05-15 11:24:34.141169] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:15.732 [2024-05-15 11:24:34.141253] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:15.732 [2024-05-15 11:24:34.141287] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:15.732 [2024-05-15 11:24:34.141342] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:31:15.732 [2024-05-15 11:24:34.141367] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:15.732 [2024-05-15 11:24:34.141378] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:31:15.732 request: 00:31:15.732 { 00:31:15.732 "name": "raid_bdev1", 00:31:15.732 "raid_level": "raid0", 00:31:15.732 "base_bdevs": [ 00:31:15.732 "malloc1", 00:31:15.732 "malloc2", 00:31:15.732 "malloc3", 00:31:15.732 "malloc4" 00:31:15.732 ], 00:31:15.732 "strip_size_kb": 64, 00:31:15.732 "superblock": false, 00:31:15.732 "method": "bdev_raid_create", 00:31:15.732 "req_id": 1 00:31:15.732 } 00:31:15.732 Got JSON-RPC error response 00:31:15.732 response: 00:31:15.732 { 00:31:15.732 "code": -17, 00:31:15.732 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:15.732 } 00:31:15.732 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:31:15.732 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:15.732 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:15.732 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:15.732 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:15.732 11:24:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.732 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:15.732 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:31:15.733 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:15.992 [2024-05-15 11:24:34.527345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:15.992 [2024-05-15 11:24:34.527487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:15.992 [2024-05-15 11:24:34.527578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f780 00:31:15.992 [2024-05-15 11:24:34.527641] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:15.992 [2024-05-15 11:24:34.529920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:15.992 [2024-05-15 11:24:34.529997] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:15.992 [2024-05-15 11:24:34.530132] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:31:15.992 [2024-05-15 11:24:34.530206] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:15.992 pt1 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.992 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.260 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:16.260 "name": "raid_bdev1", 00:31:16.260 "uuid": "2762e34e-6abf-4299-a065-e43b11f85444", 00:31:16.260 "strip_size_kb": 64, 00:31:16.260 "state": "configuring", 00:31:16.260 "raid_level": "raid0", 00:31:16.260 "superblock": true, 00:31:16.260 "num_base_bdevs": 4, 00:31:16.260 "num_base_bdevs_discovered": 1, 00:31:16.260 "num_base_bdevs_operational": 4, 00:31:16.260 "base_bdevs_list": [ 00:31:16.260 { 00:31:16.260 "name": "pt1", 00:31:16.260 "uuid": 
"b3296780-9788-5f56-abcd-e44c86f6496b", 00:31:16.260 "is_configured": true, 00:31:16.260 "data_offset": 2048, 00:31:16.260 "data_size": 63488 00:31:16.260 }, 00:31:16.260 { 00:31:16.260 "name": null, 00:31:16.260 "uuid": "3d677604-1317-5544-ad1d-1445bd5f7236", 00:31:16.260 "is_configured": false, 00:31:16.260 "data_offset": 2048, 00:31:16.260 "data_size": 63488 00:31:16.260 }, 00:31:16.260 { 00:31:16.260 "name": null, 00:31:16.260 "uuid": "8870364f-597e-5b59-8988-85cd53f947d3", 00:31:16.260 "is_configured": false, 00:31:16.260 "data_offset": 2048, 00:31:16.260 "data_size": 63488 00:31:16.260 }, 00:31:16.260 { 00:31:16.260 "name": null, 00:31:16.260 "uuid": "c0365156-cba4-54f8-ad0b-f5418e3b881b", 00:31:16.260 "is_configured": false, 00:31:16.260 "data_offset": 2048, 00:31:16.260 "data_size": 63488 00:31:16.260 } 00:31:16.260 ] 00:31:16.260 }' 00:31:16.260 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:16.261 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.840 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:31:16.840 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:17.099 [2024-05-15 11:24:35.631496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:17.099 [2024-05-15 11:24:35.631622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:17.099 [2024-05-15 11:24:35.631688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:31:17.099 [2024-05-15 11:24:35.631713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:17.099 [2024-05-15 11:24:35.632318] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:17.099 [2024-05-15 11:24:35.632372] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:17.099 [2024-05-15 11:24:35.632465] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:31:17.099 [2024-05-15 11:24:35.632493] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:17.099 pt2 00:31:17.099 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:17.357 [2024-05-15 11:24:35.839626] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.357 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.616 11:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:17.616 "name": "raid_bdev1", 00:31:17.616 "uuid": "2762e34e-6abf-4299-a065-e43b11f85444", 00:31:17.616 "strip_size_kb": 64, 00:31:17.616 "state": "configuring", 00:31:17.616 "raid_level": "raid0", 00:31:17.616 "superblock": true, 00:31:17.616 "num_base_bdevs": 4, 00:31:17.616 "num_base_bdevs_discovered": 1, 00:31:17.616 "num_base_bdevs_operational": 4, 00:31:17.616 "base_bdevs_list": [ 00:31:17.616 { 00:31:17.616 "name": "pt1", 00:31:17.616 "uuid": "b3296780-9788-5f56-abcd-e44c86f6496b", 00:31:17.616 "is_configured": true, 00:31:17.616 "data_offset": 2048, 00:31:17.616 "data_size": 63488 00:31:17.616 }, 00:31:17.616 { 00:31:17.616 "name": null, 00:31:17.616 "uuid": "3d677604-1317-5544-ad1d-1445bd5f7236", 00:31:17.616 "is_configured": false, 00:31:17.616 "data_offset": 2048, 00:31:17.616 "data_size": 63488 00:31:17.616 }, 00:31:17.616 { 00:31:17.616 "name": null, 00:31:17.616 "uuid": "8870364f-597e-5b59-8988-85cd53f947d3", 00:31:17.616 "is_configured": false, 00:31:17.616 "data_offset": 2048, 00:31:17.616 "data_size": 63488 00:31:17.616 }, 00:31:17.616 { 00:31:17.616 "name": null, 00:31:17.616 "uuid": "c0365156-cba4-54f8-ad0b-f5418e3b881b", 00:31:17.616 "is_configured": false, 00:31:17.616 "data_offset": 2048, 00:31:17.616 "data_size": 63488 00:31:17.616 } 00:31:17.616 ] 00:31:17.616 }' 00:31:17.616 11:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:17.616 11:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.184 11:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:18.184 11:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:18.184 11:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:18.442 [2024-05-15 11:24:37.023785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:18.442 [2024-05-15 11:24:37.023899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:18.442 [2024-05-15 11:24:37.023950] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032780 00:31:18.442 [2024-05-15 11:24:37.023977] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:18.442 [2024-05-15 11:24:37.024361] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:18.442 [2024-05-15 11:24:37.024413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:18.442 [2024-05-15 11:24:37.024499] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:31:18.442 [2024-05-15 11:24:37.024526] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
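
The negative check in the middle of this trace is worth calling out: once raid_bdev1 has been deleted and the passthru bdevs removed, each malloc bdev still carries the superblock written for raid_bdev1, so a fresh bdev_raid_create directly on malloc1-malloc4 is rejected with JSON-RPC error -17 ("File exists"), while simply re-registering a passthru member lets the examine path rediscover the superblock and bring the array back in the "configuring" state. A hedged bash sketch of those two steps, reusing the commands from the trace (the if/exit wrapper stands in for the harness's NOT helper, and the trailing jq projection of .state is illustrative):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Must fail: the malloc bdevs still hold raid_bdev1's superblock, so the RPC
# answers with -17 "Failed to create RAID bdev raid_bdev1: File exists".
if $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo "expected bdev_raid_create to fail" >&2
    exit 1
fi

# Re-registering one member is enough for the examine callback to find the
# superblock and re-create raid_bdev1 in the "configuring" state
# (1 of 4 base bdevs discovered).
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
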
00:31:18.442 pt2 00:31:18.442 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:18.442 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:18.442 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:18.700 [2024-05-15 11:24:37.219796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:18.700 [2024-05-15 11:24:37.219949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:18.700 [2024-05-15 11:24:37.220010] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033c80 00:31:18.700 [2024-05-15 11:24:37.220051] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:18.700 [2024-05-15 11:24:37.220458] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:18.700 [2024-05-15 11:24:37.220504] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:18.700 [2024-05-15 11:24:37.220584] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:31:18.700 [2024-05-15 11:24:37.220607] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:18.700 pt3 00:31:18.700 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:18.700 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:18.700 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:31:18.958 [2024-05-15 11:24:37.475828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:31:18.958 [2024-05-15 11:24:37.475920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:18.958 [2024-05-15 11:24:37.475961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035180 00:31:18.958 [2024-05-15 11:24:37.475990] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:18.958 [2024-05-15 11:24:37.476336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:18.958 [2024-05-15 11:24:37.476386] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:31:18.958 [2024-05-15 11:24:37.476474] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:31:18.958 [2024-05-15 11:24:37.476502] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:31:18.958 [2024-05-15 11:24:37.476592] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:31:18.958 [2024-05-15 11:24:37.476605] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:31:18.958 [2024-05-15 11:24:37.476679] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:18.958 [2024-05-15 11:24:37.477122] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:31:18.958 [2024-05-15 11:24:37.477153] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:31:18.958 [2024-05-15 
11:24:37.477254] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:18.958 pt4 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.958 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.217 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:19.217 "name": "raid_bdev1", 00:31:19.217 "uuid": "2762e34e-6abf-4299-a065-e43b11f85444", 00:31:19.217 "strip_size_kb": 64, 00:31:19.217 "state": "online", 00:31:19.217 "raid_level": "raid0", 00:31:19.217 "superblock": true, 00:31:19.217 "num_base_bdevs": 4, 00:31:19.217 "num_base_bdevs_discovered": 4, 00:31:19.217 "num_base_bdevs_operational": 4, 00:31:19.217 "base_bdevs_list": [ 00:31:19.217 { 00:31:19.217 "name": "pt1", 00:31:19.217 "uuid": "b3296780-9788-5f56-abcd-e44c86f6496b", 00:31:19.217 "is_configured": true, 00:31:19.217 "data_offset": 2048, 00:31:19.217 "data_size": 63488 00:31:19.217 }, 00:31:19.217 { 00:31:19.217 "name": "pt2", 00:31:19.217 "uuid": "3d677604-1317-5544-ad1d-1445bd5f7236", 00:31:19.217 "is_configured": true, 00:31:19.217 "data_offset": 2048, 00:31:19.217 "data_size": 63488 00:31:19.217 }, 00:31:19.217 { 00:31:19.217 "name": "pt3", 00:31:19.217 "uuid": "8870364f-597e-5b59-8988-85cd53f947d3", 00:31:19.217 "is_configured": true, 00:31:19.217 "data_offset": 2048, 00:31:19.217 "data_size": 63488 00:31:19.217 }, 00:31:19.217 { 00:31:19.217 "name": "pt4", 00:31:19.217 "uuid": "c0365156-cba4-54f8-ad0b-f5418e3b881b", 00:31:19.217 "is_configured": true, 00:31:19.217 "data_offset": 2048, 00:31:19.217 "data_size": 63488 00:31:19.217 } 00:31:19.217 ] 00:31:19.217 }' 00:31:19.217 11:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:19.217 11:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:31:19.784 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 
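
At this point all four passthru members have been re-registered, the superblock examine path has claimed them, and the array has transitioned from "configuring" back to "online"; the verify_raid_bdev_state call at bdev_raid.sh@483 then checks the descriptor against the expected name, state, level, strip size and member count. A rough approximation of that check, assuming the same target socket; the comparisons below are a sketch of what the helper asserts (the exact assertions live in bdev_raid.sh and may differ in detail), with the field names taken from the JSON dumped above:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Fetch raid_bdev1's descriptor the same way the helper does.
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

# Expected: online raid0 array, 64 KiB strips, all 4 base bdevs operational.
[ "$(jq -r .state <<< "$info")" = online ]
[ "$(jq -r .raid_level <<< "$info")" = raid0 ]
[ "$(jq -r .strip_size_kb <<< "$info")" = 64 ]
[ "$(jq -r .num_base_bdevs_operational <<< "$info")" = 4 ]
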
00:31:19.784 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:31:19.784 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:31:19.784 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:31:19.784 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:31:19.784 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:19.784 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:31:20.043 [2024-05-15 11:24:38.628488] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:20.043 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:31:20.043 "name": "raid_bdev1", 00:31:20.043 "aliases": [ 00:31:20.043 "2762e34e-6abf-4299-a065-e43b11f85444" 00:31:20.043 ], 00:31:20.043 "product_name": "Raid Volume", 00:31:20.043 "block_size": 512, 00:31:20.043 "num_blocks": 253952, 00:31:20.043 "uuid": "2762e34e-6abf-4299-a065-e43b11f85444", 00:31:20.043 "assigned_rate_limits": { 00:31:20.043 "rw_ios_per_sec": 0, 00:31:20.043 "rw_mbytes_per_sec": 0, 00:31:20.043 "r_mbytes_per_sec": 0, 00:31:20.043 "w_mbytes_per_sec": 0 00:31:20.043 }, 00:31:20.043 "claimed": false, 00:31:20.043 "zoned": false, 00:31:20.043 "supported_io_types": { 00:31:20.043 "read": true, 00:31:20.043 "write": true, 00:31:20.043 "unmap": true, 00:31:20.044 "write_zeroes": true, 00:31:20.044 "flush": true, 00:31:20.044 "reset": true, 00:31:20.044 "compare": false, 00:31:20.044 "compare_and_write": false, 00:31:20.044 "abort": false, 00:31:20.044 "nvme_admin": false, 00:31:20.044 "nvme_io": false 00:31:20.044 }, 00:31:20.044 "memory_domains": [ 00:31:20.044 { 00:31:20.044 "dma_device_id": "system", 00:31:20.044 "dma_device_type": 1 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.044 "dma_device_type": 2 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "dma_device_id": "system", 00:31:20.044 "dma_device_type": 1 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.044 "dma_device_type": 2 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "dma_device_id": "system", 00:31:20.044 "dma_device_type": 1 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.044 "dma_device_type": 2 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "dma_device_id": "system", 00:31:20.044 "dma_device_type": 1 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.044 "dma_device_type": 2 00:31:20.044 } 00:31:20.044 ], 00:31:20.044 "driver_specific": { 00:31:20.044 "raid": { 00:31:20.044 "uuid": "2762e34e-6abf-4299-a065-e43b11f85444", 00:31:20.044 "strip_size_kb": 64, 00:31:20.044 "state": "online", 00:31:20.044 "raid_level": "raid0", 00:31:20.044 "superblock": true, 00:31:20.044 "num_base_bdevs": 4, 00:31:20.044 "num_base_bdevs_discovered": 4, 00:31:20.044 "num_base_bdevs_operational": 4, 00:31:20.044 "base_bdevs_list": [ 00:31:20.044 { 00:31:20.044 "name": "pt1", 00:31:20.044 "uuid": "b3296780-9788-5f56-abcd-e44c86f6496b", 00:31:20.044 "is_configured": true, 00:31:20.044 "data_offset": 2048, 00:31:20.044 "data_size": 63488 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "name": "pt2", 00:31:20.044 "uuid": "3d677604-1317-5544-ad1d-1445bd5f7236", 00:31:20.044 
"is_configured": true, 00:31:20.044 "data_offset": 2048, 00:31:20.044 "data_size": 63488 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "name": "pt3", 00:31:20.044 "uuid": "8870364f-597e-5b59-8988-85cd53f947d3", 00:31:20.044 "is_configured": true, 00:31:20.044 "data_offset": 2048, 00:31:20.044 "data_size": 63488 00:31:20.044 }, 00:31:20.044 { 00:31:20.044 "name": "pt4", 00:31:20.044 "uuid": "c0365156-cba4-54f8-ad0b-f5418e3b881b", 00:31:20.044 "is_configured": true, 00:31:20.044 "data_offset": 2048, 00:31:20.044 "data_size": 63488 00:31:20.044 } 00:31:20.044 ] 00:31:20.044 } 00:31:20.044 } 00:31:20.044 }' 00:31:20.044 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:20.303 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:31:20.303 pt2 00:31:20.303 pt3 00:31:20.303 pt4' 00:31:20.303 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:20.303 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:20.303 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:20.562 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:20.562 "name": "pt1", 00:31:20.562 "aliases": [ 00:31:20.562 "b3296780-9788-5f56-abcd-e44c86f6496b" 00:31:20.562 ], 00:31:20.562 "product_name": "passthru", 00:31:20.562 "block_size": 512, 00:31:20.562 "num_blocks": 65536, 00:31:20.562 "uuid": "b3296780-9788-5f56-abcd-e44c86f6496b", 00:31:20.562 "assigned_rate_limits": { 00:31:20.562 "rw_ios_per_sec": 0, 00:31:20.562 "rw_mbytes_per_sec": 0, 00:31:20.562 "r_mbytes_per_sec": 0, 00:31:20.562 "w_mbytes_per_sec": 0 00:31:20.562 }, 00:31:20.562 "claimed": true, 00:31:20.562 "claim_type": "exclusive_write", 00:31:20.562 "zoned": false, 00:31:20.562 "supported_io_types": { 00:31:20.562 "read": true, 00:31:20.562 "write": true, 00:31:20.562 "unmap": true, 00:31:20.562 "write_zeroes": true, 00:31:20.562 "flush": true, 00:31:20.562 "reset": true, 00:31:20.562 "compare": false, 00:31:20.562 "compare_and_write": false, 00:31:20.562 "abort": true, 00:31:20.562 "nvme_admin": false, 00:31:20.562 "nvme_io": false 00:31:20.562 }, 00:31:20.562 "memory_domains": [ 00:31:20.562 { 00:31:20.562 "dma_device_id": "system", 00:31:20.562 "dma_device_type": 1 00:31:20.562 }, 00:31:20.562 { 00:31:20.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.562 "dma_device_type": 2 00:31:20.562 } 00:31:20.562 ], 00:31:20.562 "driver_specific": { 00:31:20.562 "passthru": { 00:31:20.562 "name": "pt1", 00:31:20.562 "base_bdev_name": "malloc1" 00:31:20.562 } 00:31:20.562 } 00:31:20.562 }' 00:31:20.562 11:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:20.562 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:20.562 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:20.562 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:20.562 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:20.562 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:20.562 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:20.820 
11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:20.820 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:20.820 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:20.820 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:20.820 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:20.820 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:20.820 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:20.820 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:21.076 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:21.076 "name": "pt2", 00:31:21.076 "aliases": [ 00:31:21.076 "3d677604-1317-5544-ad1d-1445bd5f7236" 00:31:21.076 ], 00:31:21.076 "product_name": "passthru", 00:31:21.076 "block_size": 512, 00:31:21.076 "num_blocks": 65536, 00:31:21.076 "uuid": "3d677604-1317-5544-ad1d-1445bd5f7236", 00:31:21.076 "assigned_rate_limits": { 00:31:21.076 "rw_ios_per_sec": 0, 00:31:21.076 "rw_mbytes_per_sec": 0, 00:31:21.076 "r_mbytes_per_sec": 0, 00:31:21.076 "w_mbytes_per_sec": 0 00:31:21.076 }, 00:31:21.076 "claimed": true, 00:31:21.076 "claim_type": "exclusive_write", 00:31:21.076 "zoned": false, 00:31:21.076 "supported_io_types": { 00:31:21.076 "read": true, 00:31:21.076 "write": true, 00:31:21.076 "unmap": true, 00:31:21.076 "write_zeroes": true, 00:31:21.076 "flush": true, 00:31:21.076 "reset": true, 00:31:21.076 "compare": false, 00:31:21.076 "compare_and_write": false, 00:31:21.076 "abort": true, 00:31:21.076 "nvme_admin": false, 00:31:21.076 "nvme_io": false 00:31:21.076 }, 00:31:21.076 "memory_domains": [ 00:31:21.076 { 00:31:21.076 "dma_device_id": "system", 00:31:21.076 "dma_device_type": 1 00:31:21.076 }, 00:31:21.076 { 00:31:21.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:21.076 "dma_device_type": 2 00:31:21.076 } 00:31:21.076 ], 00:31:21.076 "driver_specific": { 00:31:21.076 "passthru": { 00:31:21.076 "name": "pt2", 00:31:21.076 "base_bdev_name": "malloc2" 00:31:21.076 } 00:31:21.076 } 00:31:21.076 }' 00:31:21.076 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:21.076 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:21.335 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:21.335 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:21.335 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:21.335 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:21.335 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:21.335 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:21.335 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:21.335 11:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:21.591 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:21.591 11:24:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:21.591 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:21.592 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:31:21.592 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:21.870 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:21.870 "name": "pt3", 00:31:21.870 "aliases": [ 00:31:21.870 "8870364f-597e-5b59-8988-85cd53f947d3" 00:31:21.870 ], 00:31:21.870 "product_name": "passthru", 00:31:21.870 "block_size": 512, 00:31:21.870 "num_blocks": 65536, 00:31:21.870 "uuid": "8870364f-597e-5b59-8988-85cd53f947d3", 00:31:21.870 "assigned_rate_limits": { 00:31:21.870 "rw_ios_per_sec": 0, 00:31:21.870 "rw_mbytes_per_sec": 0, 00:31:21.870 "r_mbytes_per_sec": 0, 00:31:21.870 "w_mbytes_per_sec": 0 00:31:21.870 }, 00:31:21.870 "claimed": true, 00:31:21.870 "claim_type": "exclusive_write", 00:31:21.870 "zoned": false, 00:31:21.870 "supported_io_types": { 00:31:21.870 "read": true, 00:31:21.870 "write": true, 00:31:21.870 "unmap": true, 00:31:21.870 "write_zeroes": true, 00:31:21.870 "flush": true, 00:31:21.870 "reset": true, 00:31:21.870 "compare": false, 00:31:21.870 "compare_and_write": false, 00:31:21.870 "abort": true, 00:31:21.870 "nvme_admin": false, 00:31:21.870 "nvme_io": false 00:31:21.870 }, 00:31:21.870 "memory_domains": [ 00:31:21.870 { 00:31:21.870 "dma_device_id": "system", 00:31:21.870 "dma_device_type": 1 00:31:21.870 }, 00:31:21.870 { 00:31:21.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:21.870 "dma_device_type": 2 00:31:21.870 } 00:31:21.870 ], 00:31:21.870 "driver_specific": { 00:31:21.870 "passthru": { 00:31:21.870 "name": "pt3", 00:31:21.870 "base_bdev_name": "malloc3" 00:31:21.870 } 00:31:21.870 } 00:31:21.870 }' 00:31:21.870 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:21.870 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:21.870 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:21.870 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:21.870 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:31:22.128 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:22.385 11:24:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:22.385 "name": "pt4", 00:31:22.385 "aliases": [ 00:31:22.385 "c0365156-cba4-54f8-ad0b-f5418e3b881b" 00:31:22.385 ], 00:31:22.385 "product_name": "passthru", 00:31:22.385 "block_size": 512, 00:31:22.385 "num_blocks": 65536, 00:31:22.385 "uuid": "c0365156-cba4-54f8-ad0b-f5418e3b881b", 00:31:22.385 "assigned_rate_limits": { 00:31:22.385 "rw_ios_per_sec": 0, 00:31:22.385 "rw_mbytes_per_sec": 0, 00:31:22.385 "r_mbytes_per_sec": 0, 00:31:22.385 "w_mbytes_per_sec": 0 00:31:22.385 }, 00:31:22.385 "claimed": true, 00:31:22.385 "claim_type": "exclusive_write", 00:31:22.385 "zoned": false, 00:31:22.385 "supported_io_types": { 00:31:22.385 "read": true, 00:31:22.385 "write": true, 00:31:22.385 "unmap": true, 00:31:22.385 "write_zeroes": true, 00:31:22.385 "flush": true, 00:31:22.385 "reset": true, 00:31:22.385 "compare": false, 00:31:22.385 "compare_and_write": false, 00:31:22.385 "abort": true, 00:31:22.386 "nvme_admin": false, 00:31:22.386 "nvme_io": false 00:31:22.386 }, 00:31:22.386 "memory_domains": [ 00:31:22.386 { 00:31:22.386 "dma_device_id": "system", 00:31:22.386 "dma_device_type": 1 00:31:22.386 }, 00:31:22.386 { 00:31:22.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:22.386 "dma_device_type": 2 00:31:22.386 } 00:31:22.386 ], 00:31:22.386 "driver_specific": { 00:31:22.386 "passthru": { 00:31:22.386 "name": "pt4", 00:31:22.386 "base_bdev_name": "malloc4" 00:31:22.386 } 00:31:22.386 } 00:31:22.386 }' 00:31:22.386 11:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:22.642 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:22.642 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:22.642 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:22.642 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:22.642 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:22.642 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:22.642 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:22.642 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:22.642 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:22.901 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:22.901 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:22.901 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:22.901 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:23.160 [2024-05-15 11:24:41.553066] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2762e34e-6abf-4299-a065-e43b11f85444 '!=' 2762e34e-6abf-4299-a065-e43b11f85444 ']' 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # 
return 1 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 66715 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 66715 ']' 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 66715 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66715 00:31:23.160 killing process with pid 66715 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66715' 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 66715 00:31:23.160 11:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 66715 00:31:23.160 [2024-05-15 11:24:41.590192] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:23.160 [2024-05-15 11:24:41.590268] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:23.160 [2024-05-15 11:24:41.590316] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:23.160 [2024-05-15 11:24:41.590327] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:31:23.418 [2024-05-15 11:24:41.914102] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:24.791 11:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:31:24.791 00:31:24.791 real 0m17.841s 00:31:24.791 user 0m32.573s 00:31:24.791 sys 0m1.761s 00:31:24.791 11:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:24.791 11:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.791 ************************************ 00:31:24.791 END TEST raid_superblock_test 00:31:24.791 ************************************ 00:31:24.791 11:24:43 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:31:24.791 11:24:43 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:31:24.791 11:24:43 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:31:24.791 11:24:43 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:24.791 11:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:24.791 ************************************ 00:31:24.791 START TEST raid_state_function_test 00:31:24.791 ************************************ 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:31:24.791 11:24:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:31:24.791 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:31:24.792 Process raid pid: 67279 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=67279 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 67279' 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 67279 /var/tmp/spdk-raid.sock 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 67279 ']' 00:31:24.792 11:24:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:24.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:24.792 11:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.792 [2024-05-15 11:24:43.369552] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:31:24.792 [2024-05-15 11:24:43.369749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.049 [2024-05-15 11:24:43.540150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.307 [2024-05-15 11:24:43.779959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.565 [2024-05-15 11:24:43.979635] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:25.823 11:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:25.823 11:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:31:25.823 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:31:25.823 [2024-05-15 11:24:44.445688] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:25.823 [2024-05-15 11:24:44.445771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:25.823 [2024-05-15 11:24:44.445787] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:25.823 [2024-05-15 11:24:44.445986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:25.823 [2024-05-15 11:24:44.446010] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:25.823 [2024-05-15 11:24:44.446065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:25.823 [2024-05-15 11:24:44.446077] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:25.823 [2024-05-15 11:24:44.446102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:26.082 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:26.083 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.083 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:26.083 "name": "Existed_Raid", 00:31:26.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.083 "strip_size_kb": 64, 00:31:26.083 "state": "configuring", 00:31:26.083 "raid_level": "concat", 00:31:26.083 "superblock": false, 00:31:26.083 "num_base_bdevs": 4, 00:31:26.083 "num_base_bdevs_discovered": 0, 00:31:26.083 "num_base_bdevs_operational": 4, 00:31:26.083 "base_bdevs_list": [ 00:31:26.083 { 00:31:26.083 "name": "BaseBdev1", 00:31:26.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.083 "is_configured": false, 00:31:26.083 "data_offset": 0, 00:31:26.083 "data_size": 0 00:31:26.083 }, 00:31:26.083 { 00:31:26.083 "name": "BaseBdev2", 00:31:26.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.083 "is_configured": false, 00:31:26.083 "data_offset": 0, 00:31:26.083 "data_size": 0 00:31:26.083 }, 00:31:26.083 { 00:31:26.083 "name": "BaseBdev3", 00:31:26.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.083 "is_configured": false, 00:31:26.083 "data_offset": 0, 00:31:26.083 "data_size": 0 00:31:26.083 }, 00:31:26.083 { 00:31:26.083 "name": "BaseBdev4", 00:31:26.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.083 "is_configured": false, 00:31:26.083 "data_offset": 0, 00:31:26.083 "data_size": 0 00:31:26.083 } 00:31:26.083 ] 00:31:26.083 }' 00:31:26.083 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:26.083 11:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.081 11:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:27.081 [2024-05-15 11:24:45.553729] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:27.081 [2024-05-15 11:24:45.553777] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:31:27.081 11:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:31:27.340 [2024-05-15 11:24:45.745753] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:27.340 [2024-05-15 11:24:45.746050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:27.340 [2024-05-15 11:24:45.746110] 
bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:27.340 [2024-05-15 11:24:45.746190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:27.340 [2024-05-15 11:24:45.746215] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:27.340 [2024-05-15 11:24:45.746254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:27.340 [2024-05-15 11:24:45.746274] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:27.340 [2024-05-15 11:24:45.746327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:27.340 11:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:27.599 [2024-05-15 11:24:46.027707] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:27.599 BaseBdev1 00:31:27.599 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:31:27.599 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:31:27.599 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:27.599 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:31:27.599 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:27.599 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:27.599 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:27.857 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:27.857 [ 00:31:27.857 { 00:31:27.857 "name": "BaseBdev1", 00:31:27.857 "aliases": [ 00:31:27.857 "4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3" 00:31:27.857 ], 00:31:27.857 "product_name": "Malloc disk", 00:31:27.857 "block_size": 512, 00:31:27.857 "num_blocks": 65536, 00:31:27.857 "uuid": "4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3", 00:31:27.857 "assigned_rate_limits": { 00:31:27.857 "rw_ios_per_sec": 0, 00:31:27.857 "rw_mbytes_per_sec": 0, 00:31:27.857 "r_mbytes_per_sec": 0, 00:31:27.857 "w_mbytes_per_sec": 0 00:31:27.857 }, 00:31:27.857 "claimed": true, 00:31:27.857 "claim_type": "exclusive_write", 00:31:27.857 "zoned": false, 00:31:27.857 "supported_io_types": { 00:31:27.857 "read": true, 00:31:27.857 "write": true, 00:31:27.857 "unmap": true, 00:31:27.857 "write_zeroes": true, 00:31:27.857 "flush": true, 00:31:27.857 "reset": true, 00:31:27.857 "compare": false, 00:31:27.857 "compare_and_write": false, 00:31:27.857 "abort": true, 00:31:27.857 "nvme_admin": false, 00:31:27.857 "nvme_io": false 00:31:27.857 }, 00:31:27.857 "memory_domains": [ 00:31:27.857 { 00:31:27.857 "dma_device_id": "system", 00:31:27.857 "dma_device_type": 1 00:31:27.857 }, 00:31:27.857 { 00:31:27.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.857 "dma_device_type": 2 00:31:27.857 } 00:31:27.857 ], 00:31:27.857 "driver_specific": {} 00:31:27.857 } 00:31:27.857 ] 00:31:27.857 11:24:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:31:27.857 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.858 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:28.116 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:28.116 "name": "Existed_Raid", 00:31:28.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.116 "strip_size_kb": 64, 00:31:28.116 "state": "configuring", 00:31:28.116 "raid_level": "concat", 00:31:28.116 "superblock": false, 00:31:28.116 "num_base_bdevs": 4, 00:31:28.116 "num_base_bdevs_discovered": 1, 00:31:28.116 "num_base_bdevs_operational": 4, 00:31:28.116 "base_bdevs_list": [ 00:31:28.116 { 00:31:28.116 "name": "BaseBdev1", 00:31:28.116 "uuid": "4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3", 00:31:28.116 "is_configured": true, 00:31:28.116 "data_offset": 0, 00:31:28.116 "data_size": 65536 00:31:28.116 }, 00:31:28.116 { 00:31:28.116 "name": "BaseBdev2", 00:31:28.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.116 "is_configured": false, 00:31:28.116 "data_offset": 0, 00:31:28.116 "data_size": 0 00:31:28.116 }, 00:31:28.116 { 00:31:28.116 "name": "BaseBdev3", 00:31:28.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.116 "is_configured": false, 00:31:28.116 "data_offset": 0, 00:31:28.116 "data_size": 0 00:31:28.116 }, 00:31:28.116 { 00:31:28.116 "name": "BaseBdev4", 00:31:28.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.116 "is_configured": false, 00:31:28.116 "data_offset": 0, 00:31:28.116 "data_size": 0 00:31:28.116 } 00:31:28.116 ] 00:31:28.116 }' 00:31:28.116 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:28.116 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.684 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:28.942 [2024-05-15 11:24:47.451926] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:28.942 [2024-05-15 11:24:47.452001] bdev_raid.c: 
350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:31:28.942 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:31:29.201 [2024-05-15 11:24:47.696019] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:29.201 [2024-05-15 11:24:47.697736] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:29.201 [2024-05-15 11:24:47.697832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:29.201 [2024-05-15 11:24:47.697857] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:29.201 [2024-05-15 11:24:47.697884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:29.201 [2024-05-15 11:24:47.697894] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:29.201 [2024-05-15 11:24:47.697910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:29.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.459 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:29.459 "name": "Existed_Raid", 00:31:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.459 "strip_size_kb": 64, 00:31:29.459 "state": "configuring", 00:31:29.459 "raid_level": "concat", 00:31:29.459 "superblock": false, 00:31:29.459 "num_base_bdevs": 4, 00:31:29.459 "num_base_bdevs_discovered": 1, 00:31:29.459 "num_base_bdevs_operational": 4, 00:31:29.459 "base_bdevs_list": [ 00:31:29.459 { 00:31:29.459 "name": "BaseBdev1", 00:31:29.459 "uuid": 
"4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3", 00:31:29.459 "is_configured": true, 00:31:29.459 "data_offset": 0, 00:31:29.459 "data_size": 65536 00:31:29.459 }, 00:31:29.459 { 00:31:29.460 "name": "BaseBdev2", 00:31:29.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.460 "is_configured": false, 00:31:29.460 "data_offset": 0, 00:31:29.460 "data_size": 0 00:31:29.460 }, 00:31:29.460 { 00:31:29.460 "name": "BaseBdev3", 00:31:29.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.460 "is_configured": false, 00:31:29.460 "data_offset": 0, 00:31:29.460 "data_size": 0 00:31:29.460 }, 00:31:29.460 { 00:31:29.460 "name": "BaseBdev4", 00:31:29.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.460 "is_configured": false, 00:31:29.460 "data_offset": 0, 00:31:29.460 "data_size": 0 00:31:29.460 } 00:31:29.460 ] 00:31:29.460 }' 00:31:29.460 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:29.460 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.027 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:30.286 [2024-05-15 11:24:48.825396] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:30.286 BaseBdev2 00:31:30.286 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:31:30.286 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:31:30.286 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:30.286 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:31:30.286 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:30.286 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:30.286 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:30.545 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:30.804 [ 00:31:30.804 { 00:31:30.804 "name": "BaseBdev2", 00:31:30.804 "aliases": [ 00:31:30.804 "90cfd86e-b3de-41fd-885e-5be5c3c8b112" 00:31:30.804 ], 00:31:30.804 "product_name": "Malloc disk", 00:31:30.804 "block_size": 512, 00:31:30.804 "num_blocks": 65536, 00:31:30.804 "uuid": "90cfd86e-b3de-41fd-885e-5be5c3c8b112", 00:31:30.804 "assigned_rate_limits": { 00:31:30.804 "rw_ios_per_sec": 0, 00:31:30.804 "rw_mbytes_per_sec": 0, 00:31:30.804 "r_mbytes_per_sec": 0, 00:31:30.804 "w_mbytes_per_sec": 0 00:31:30.804 }, 00:31:30.804 "claimed": true, 00:31:30.804 "claim_type": "exclusive_write", 00:31:30.804 "zoned": false, 00:31:30.804 "supported_io_types": { 00:31:30.804 "read": true, 00:31:30.804 "write": true, 00:31:30.804 "unmap": true, 00:31:30.804 "write_zeroes": true, 00:31:30.804 "flush": true, 00:31:30.804 "reset": true, 00:31:30.804 "compare": false, 00:31:30.804 "compare_and_write": false, 00:31:30.804 "abort": true, 00:31:30.804 "nvme_admin": false, 00:31:30.804 "nvme_io": false 00:31:30.804 }, 00:31:30.804 "memory_domains": [ 00:31:30.804 { 00:31:30.804 
"dma_device_id": "system", 00:31:30.804 "dma_device_type": 1 00:31:30.804 }, 00:31:30.804 { 00:31:30.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:30.804 "dma_device_type": 2 00:31:30.804 } 00:31:30.804 ], 00:31:30.804 "driver_specific": {} 00:31:30.804 } 00:31:30.804 ] 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.804 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:31.063 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:31.063 "name": "Existed_Raid", 00:31:31.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.063 "strip_size_kb": 64, 00:31:31.063 "state": "configuring", 00:31:31.063 "raid_level": "concat", 00:31:31.063 "superblock": false, 00:31:31.063 "num_base_bdevs": 4, 00:31:31.063 "num_base_bdevs_discovered": 2, 00:31:31.063 "num_base_bdevs_operational": 4, 00:31:31.063 "base_bdevs_list": [ 00:31:31.063 { 00:31:31.063 "name": "BaseBdev1", 00:31:31.063 "uuid": "4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3", 00:31:31.063 "is_configured": true, 00:31:31.063 "data_offset": 0, 00:31:31.063 "data_size": 65536 00:31:31.063 }, 00:31:31.063 { 00:31:31.063 "name": "BaseBdev2", 00:31:31.063 "uuid": "90cfd86e-b3de-41fd-885e-5be5c3c8b112", 00:31:31.063 "is_configured": true, 00:31:31.063 "data_offset": 0, 00:31:31.063 "data_size": 65536 00:31:31.063 }, 00:31:31.063 { 00:31:31.063 "name": "BaseBdev3", 00:31:31.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.063 "is_configured": false, 00:31:31.063 "data_offset": 0, 00:31:31.063 "data_size": 0 00:31:31.063 }, 00:31:31.063 { 00:31:31.063 "name": "BaseBdev4", 00:31:31.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.063 "is_configured": false, 00:31:31.063 "data_offset": 0, 00:31:31.063 "data_size": 0 00:31:31.063 } 00:31:31.063 ] 00:31:31.063 }' 00:31:31.063 11:24:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:31.063 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.631 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:31.890 [2024-05-15 11:24:50.386771] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:31.890 BaseBdev3 00:31:31.890 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:31:31.890 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:31:31.890 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:31.890 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:31:31.890 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:31.890 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:31.890 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:32.149 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:32.408 [ 00:31:32.408 { 00:31:32.408 "name": "BaseBdev3", 00:31:32.408 "aliases": [ 00:31:32.408 "6062931f-1d5d-4b03-beff-232c848eabdb" 00:31:32.408 ], 00:31:32.408 "product_name": "Malloc disk", 00:31:32.408 "block_size": 512, 00:31:32.408 "num_blocks": 65536, 00:31:32.408 "uuid": "6062931f-1d5d-4b03-beff-232c848eabdb", 00:31:32.408 "assigned_rate_limits": { 00:31:32.408 "rw_ios_per_sec": 0, 00:31:32.408 "rw_mbytes_per_sec": 0, 00:31:32.408 "r_mbytes_per_sec": 0, 00:31:32.408 "w_mbytes_per_sec": 0 00:31:32.408 }, 00:31:32.408 "claimed": true, 00:31:32.408 "claim_type": "exclusive_write", 00:31:32.408 "zoned": false, 00:31:32.408 "supported_io_types": { 00:31:32.408 "read": true, 00:31:32.408 "write": true, 00:31:32.408 "unmap": true, 00:31:32.408 "write_zeroes": true, 00:31:32.408 "flush": true, 00:31:32.408 "reset": true, 00:31:32.408 "compare": false, 00:31:32.408 "compare_and_write": false, 00:31:32.408 "abort": true, 00:31:32.408 "nvme_admin": false, 00:31:32.408 "nvme_io": false 00:31:32.408 }, 00:31:32.408 "memory_domains": [ 00:31:32.408 { 00:31:32.408 "dma_device_id": "system", 00:31:32.408 "dma_device_type": 1 00:31:32.408 }, 00:31:32.408 { 00:31:32.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:32.408 "dma_device_type": 2 00:31:32.408 } 00:31:32.408 ], 00:31:32.408 "driver_specific": {} 00:31:32.408 } 00:31:32.408 ] 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:32.408 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.667 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:32.667 "name": "Existed_Raid", 00:31:32.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.667 "strip_size_kb": 64, 00:31:32.667 "state": "configuring", 00:31:32.667 "raid_level": "concat", 00:31:32.667 "superblock": false, 00:31:32.667 "num_base_bdevs": 4, 00:31:32.667 "num_base_bdevs_discovered": 3, 00:31:32.667 "num_base_bdevs_operational": 4, 00:31:32.667 "base_bdevs_list": [ 00:31:32.667 { 00:31:32.667 "name": "BaseBdev1", 00:31:32.667 "uuid": "4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3", 00:31:32.667 "is_configured": true, 00:31:32.667 "data_offset": 0, 00:31:32.667 "data_size": 65536 00:31:32.667 }, 00:31:32.667 { 00:31:32.667 "name": "BaseBdev2", 00:31:32.667 "uuid": "90cfd86e-b3de-41fd-885e-5be5c3c8b112", 00:31:32.667 "is_configured": true, 00:31:32.667 "data_offset": 0, 00:31:32.667 "data_size": 65536 00:31:32.667 }, 00:31:32.667 { 00:31:32.667 "name": "BaseBdev3", 00:31:32.667 "uuid": "6062931f-1d5d-4b03-beff-232c848eabdb", 00:31:32.667 "is_configured": true, 00:31:32.667 "data_offset": 0, 00:31:32.667 "data_size": 65536 00:31:32.667 }, 00:31:32.667 { 00:31:32.667 "name": "BaseBdev4", 00:31:32.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.667 "is_configured": false, 00:31:32.667 "data_offset": 0, 00:31:32.667 "data_size": 0 00:31:32.667 } 00:31:32.667 ] 00:31:32.667 }' 00:31:32.667 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:32.667 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:33.235 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:31:33.494 [2024-05-15 11:24:52.090821] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:33.494 BaseBdev4 00:31:33.494 [2024-05-15 11:24:52.091135] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:31:33.494 [2024-05-15 11:24:52.091157] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:31:33.494 [2024-05-15 11:24:52.091285] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:31:33.494 [2024-05-15 11:24:52.091598] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:31:33.494 [2024-05-15 11:24:52.091615] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:31:33.494 [2024-05-15 11:24:52.091826] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:33.494 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:31:33.494 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:31:33.494 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:33.494 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:31:33.494 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:33.494 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:33.494 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:33.754 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:34.017 [ 00:31:34.018 { 00:31:34.018 "name": "BaseBdev4", 00:31:34.018 "aliases": [ 00:31:34.018 "97e4190e-0f25-474e-9c04-444ad2c9d239" 00:31:34.018 ], 00:31:34.018 "product_name": "Malloc disk", 00:31:34.018 "block_size": 512, 00:31:34.018 "num_blocks": 65536, 00:31:34.018 "uuid": "97e4190e-0f25-474e-9c04-444ad2c9d239", 00:31:34.018 "assigned_rate_limits": { 00:31:34.018 "rw_ios_per_sec": 0, 00:31:34.018 "rw_mbytes_per_sec": 0, 00:31:34.018 "r_mbytes_per_sec": 0, 00:31:34.018 "w_mbytes_per_sec": 0 00:31:34.018 }, 00:31:34.018 "claimed": true, 00:31:34.018 "claim_type": "exclusive_write", 00:31:34.018 "zoned": false, 00:31:34.018 "supported_io_types": { 00:31:34.018 "read": true, 00:31:34.018 "write": true, 00:31:34.018 "unmap": true, 00:31:34.018 "write_zeroes": true, 00:31:34.018 "flush": true, 00:31:34.018 "reset": true, 00:31:34.018 "compare": false, 00:31:34.018 "compare_and_write": false, 00:31:34.018 "abort": true, 00:31:34.018 "nvme_admin": false, 00:31:34.018 "nvme_io": false 00:31:34.018 }, 00:31:34.018 "memory_domains": [ 00:31:34.018 { 00:31:34.018 "dma_device_id": "system", 00:31:34.018 "dma_device_type": 1 00:31:34.018 }, 00:31:34.018 { 00:31:34.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:34.018 "dma_device_type": 2 00:31:34.018 } 00:31:34.018 ], 00:31:34.018 "driver_specific": {} 00:31:34.018 } 00:31:34.018 ] 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- 
# local raid_level=concat 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.018 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:34.276 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:34.276 "name": "Existed_Raid", 00:31:34.276 "uuid": "54564e65-90d9-4aad-8e8b-980b4e48b39d", 00:31:34.276 "strip_size_kb": 64, 00:31:34.276 "state": "online", 00:31:34.276 "raid_level": "concat", 00:31:34.276 "superblock": false, 00:31:34.276 "num_base_bdevs": 4, 00:31:34.276 "num_base_bdevs_discovered": 4, 00:31:34.276 "num_base_bdevs_operational": 4, 00:31:34.276 "base_bdevs_list": [ 00:31:34.276 { 00:31:34.276 "name": "BaseBdev1", 00:31:34.276 "uuid": "4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3", 00:31:34.276 "is_configured": true, 00:31:34.276 "data_offset": 0, 00:31:34.276 "data_size": 65536 00:31:34.276 }, 00:31:34.276 { 00:31:34.276 "name": "BaseBdev2", 00:31:34.276 "uuid": "90cfd86e-b3de-41fd-885e-5be5c3c8b112", 00:31:34.276 "is_configured": true, 00:31:34.276 "data_offset": 0, 00:31:34.276 "data_size": 65536 00:31:34.276 }, 00:31:34.276 { 00:31:34.276 "name": "BaseBdev3", 00:31:34.276 "uuid": "6062931f-1d5d-4b03-beff-232c848eabdb", 00:31:34.276 "is_configured": true, 00:31:34.276 "data_offset": 0, 00:31:34.276 "data_size": 65536 00:31:34.276 }, 00:31:34.276 { 00:31:34.276 "name": "BaseBdev4", 00:31:34.276 "uuid": "97e4190e-0f25-474e-9c04-444ad2c9d239", 00:31:34.276 "is_configured": true, 00:31:34.276 "data_offset": 0, 00:31:34.276 "data_size": 65536 00:31:34.276 } 00:31:34.276 ] 00:31:34.276 }' 00:31:34.276 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:34.276 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # jq '.[]' 00:31:35.211 [2024-05-15 11:24:53.751397] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:31:35.211 "name": "Existed_Raid", 00:31:35.211 "aliases": [ 00:31:35.211 "54564e65-90d9-4aad-8e8b-980b4e48b39d" 00:31:35.211 ], 00:31:35.211 "product_name": "Raid Volume", 00:31:35.211 "block_size": 512, 00:31:35.211 "num_blocks": 262144, 00:31:35.211 "uuid": "54564e65-90d9-4aad-8e8b-980b4e48b39d", 00:31:35.211 "assigned_rate_limits": { 00:31:35.211 "rw_ios_per_sec": 0, 00:31:35.211 "rw_mbytes_per_sec": 0, 00:31:35.211 "r_mbytes_per_sec": 0, 00:31:35.211 "w_mbytes_per_sec": 0 00:31:35.211 }, 00:31:35.211 "claimed": false, 00:31:35.211 "zoned": false, 00:31:35.211 "supported_io_types": { 00:31:35.211 "read": true, 00:31:35.211 "write": true, 00:31:35.211 "unmap": true, 00:31:35.211 "write_zeroes": true, 00:31:35.211 "flush": true, 00:31:35.211 "reset": true, 00:31:35.211 "compare": false, 00:31:35.211 "compare_and_write": false, 00:31:35.211 "abort": false, 00:31:35.211 "nvme_admin": false, 00:31:35.211 "nvme_io": false 00:31:35.211 }, 00:31:35.211 "memory_domains": [ 00:31:35.211 { 00:31:35.211 "dma_device_id": "system", 00:31:35.211 "dma_device_type": 1 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.211 "dma_device_type": 2 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "dma_device_id": "system", 00:31:35.211 "dma_device_type": 1 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.211 "dma_device_type": 2 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "dma_device_id": "system", 00:31:35.211 "dma_device_type": 1 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.211 "dma_device_type": 2 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "dma_device_id": "system", 00:31:35.211 "dma_device_type": 1 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.211 "dma_device_type": 2 00:31:35.211 } 00:31:35.211 ], 00:31:35.211 "driver_specific": { 00:31:35.211 "raid": { 00:31:35.211 "uuid": "54564e65-90d9-4aad-8e8b-980b4e48b39d", 00:31:35.211 "strip_size_kb": 64, 00:31:35.211 "state": "online", 00:31:35.211 "raid_level": "concat", 00:31:35.211 "superblock": false, 00:31:35.211 "num_base_bdevs": 4, 00:31:35.211 "num_base_bdevs_discovered": 4, 00:31:35.211 "num_base_bdevs_operational": 4, 00:31:35.211 "base_bdevs_list": [ 00:31:35.211 { 00:31:35.211 "name": "BaseBdev1", 00:31:35.211 "uuid": "4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3", 00:31:35.211 "is_configured": true, 00:31:35.211 "data_offset": 0, 00:31:35.211 "data_size": 65536 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "name": "BaseBdev2", 00:31:35.211 "uuid": "90cfd86e-b3de-41fd-885e-5be5c3c8b112", 00:31:35.211 "is_configured": true, 00:31:35.211 "data_offset": 0, 00:31:35.211 "data_size": 65536 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "name": "BaseBdev3", 00:31:35.211 "uuid": "6062931f-1d5d-4b03-beff-232c848eabdb", 00:31:35.211 "is_configured": true, 00:31:35.211 "data_offset": 0, 00:31:35.211 "data_size": 65536 00:31:35.211 }, 00:31:35.211 { 00:31:35.211 "name": "BaseBdev4", 00:31:35.211 "uuid": "97e4190e-0f25-474e-9c04-444ad2c9d239", 00:31:35.211 "is_configured": true, 00:31:35.211 "data_offset": 0, 00:31:35.211 "data_size": 65536 00:31:35.211 } 00:31:35.211 ] 00:31:35.211 } 00:31:35.211 } 00:31:35.211 }' 00:31:35.211 
11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:31:35.211 BaseBdev2 00:31:35.211 BaseBdev3 00:31:35.211 BaseBdev4' 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:35.211 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:35.469 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:35.469 "name": "BaseBdev1", 00:31:35.469 "aliases": [ 00:31:35.469 "4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3" 00:31:35.469 ], 00:31:35.469 "product_name": "Malloc disk", 00:31:35.469 "block_size": 512, 00:31:35.469 "num_blocks": 65536, 00:31:35.469 "uuid": "4fa8267b-2ef9-4fab-afc1-3ce15c0dadd3", 00:31:35.469 "assigned_rate_limits": { 00:31:35.469 "rw_ios_per_sec": 0, 00:31:35.469 "rw_mbytes_per_sec": 0, 00:31:35.469 "r_mbytes_per_sec": 0, 00:31:35.469 "w_mbytes_per_sec": 0 00:31:35.469 }, 00:31:35.469 "claimed": true, 00:31:35.469 "claim_type": "exclusive_write", 00:31:35.469 "zoned": false, 00:31:35.469 "supported_io_types": { 00:31:35.469 "read": true, 00:31:35.469 "write": true, 00:31:35.469 "unmap": true, 00:31:35.469 "write_zeroes": true, 00:31:35.469 "flush": true, 00:31:35.469 "reset": true, 00:31:35.469 "compare": false, 00:31:35.469 "compare_and_write": false, 00:31:35.469 "abort": true, 00:31:35.469 "nvme_admin": false, 00:31:35.469 "nvme_io": false 00:31:35.469 }, 00:31:35.469 "memory_domains": [ 00:31:35.469 { 00:31:35.469 "dma_device_id": "system", 00:31:35.469 "dma_device_type": 1 00:31:35.469 }, 00:31:35.469 { 00:31:35.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.469 "dma_device_type": 2 00:31:35.469 } 00:31:35.469 ], 00:31:35.469 "driver_specific": {} 00:31:35.469 }' 00:31:35.469 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:35.469 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:35.727 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:35.727 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:35.727 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:35.727 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:35.727 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:35.727 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:35.727 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:35.727 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:35.986 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:35.986 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:35.986 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:35.986 11:24:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:35.986 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:36.245 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:36.245 "name": "BaseBdev2", 00:31:36.245 "aliases": [ 00:31:36.245 "90cfd86e-b3de-41fd-885e-5be5c3c8b112" 00:31:36.245 ], 00:31:36.245 "product_name": "Malloc disk", 00:31:36.245 "block_size": 512, 00:31:36.245 "num_blocks": 65536, 00:31:36.245 "uuid": "90cfd86e-b3de-41fd-885e-5be5c3c8b112", 00:31:36.245 "assigned_rate_limits": { 00:31:36.245 "rw_ios_per_sec": 0, 00:31:36.245 "rw_mbytes_per_sec": 0, 00:31:36.245 "r_mbytes_per_sec": 0, 00:31:36.245 "w_mbytes_per_sec": 0 00:31:36.245 }, 00:31:36.245 "claimed": true, 00:31:36.245 "claim_type": "exclusive_write", 00:31:36.245 "zoned": false, 00:31:36.245 "supported_io_types": { 00:31:36.245 "read": true, 00:31:36.245 "write": true, 00:31:36.245 "unmap": true, 00:31:36.245 "write_zeroes": true, 00:31:36.245 "flush": true, 00:31:36.245 "reset": true, 00:31:36.245 "compare": false, 00:31:36.245 "compare_and_write": false, 00:31:36.245 "abort": true, 00:31:36.245 "nvme_admin": false, 00:31:36.245 "nvme_io": false 00:31:36.245 }, 00:31:36.245 "memory_domains": [ 00:31:36.245 { 00:31:36.245 "dma_device_id": "system", 00:31:36.245 "dma_device_type": 1 00:31:36.245 }, 00:31:36.245 { 00:31:36.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.245 "dma_device_type": 2 00:31:36.245 } 00:31:36.245 ], 00:31:36.245 "driver_specific": {} 00:31:36.245 }' 00:31:36.245 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:36.245 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:36.245 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:36.245 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:36.245 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:36.504 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:36.504 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:36.504 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:36.504 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:36.504 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:36.504 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:36.504 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:36.504 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:36.504 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:36.504 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:36.763 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:36.763 "name": "BaseBdev3", 00:31:36.763 "aliases": [ 00:31:36.763 "6062931f-1d5d-4b03-beff-232c848eabdb" 00:31:36.763 ], 
00:31:36.763 "product_name": "Malloc disk", 00:31:36.763 "block_size": 512, 00:31:36.763 "num_blocks": 65536, 00:31:36.763 "uuid": "6062931f-1d5d-4b03-beff-232c848eabdb", 00:31:36.763 "assigned_rate_limits": { 00:31:36.763 "rw_ios_per_sec": 0, 00:31:36.763 "rw_mbytes_per_sec": 0, 00:31:36.763 "r_mbytes_per_sec": 0, 00:31:36.763 "w_mbytes_per_sec": 0 00:31:36.763 }, 00:31:36.763 "claimed": true, 00:31:36.763 "claim_type": "exclusive_write", 00:31:36.763 "zoned": false, 00:31:36.763 "supported_io_types": { 00:31:36.763 "read": true, 00:31:36.763 "write": true, 00:31:36.763 "unmap": true, 00:31:36.763 "write_zeroes": true, 00:31:36.763 "flush": true, 00:31:36.763 "reset": true, 00:31:36.763 "compare": false, 00:31:36.763 "compare_and_write": false, 00:31:36.763 "abort": true, 00:31:36.763 "nvme_admin": false, 00:31:36.763 "nvme_io": false 00:31:36.763 }, 00:31:36.763 "memory_domains": [ 00:31:36.763 { 00:31:36.763 "dma_device_id": "system", 00:31:36.763 "dma_device_type": 1 00:31:36.763 }, 00:31:36.763 { 00:31:36.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.763 "dma_device_type": 2 00:31:36.763 } 00:31:36.763 ], 00:31:36.763 "driver_specific": {} 00:31:36.763 }' 00:31:36.763 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:36.763 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:36.763 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:36.763 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:37.021 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:37.021 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:37.021 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:37.021 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:37.021 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:37.021 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:37.021 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:37.279 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:37.279 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:37.279 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:37.279 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:31:37.279 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:37.279 "name": "BaseBdev4", 00:31:37.279 "aliases": [ 00:31:37.279 "97e4190e-0f25-474e-9c04-444ad2c9d239" 00:31:37.279 ], 00:31:37.279 "product_name": "Malloc disk", 00:31:37.279 "block_size": 512, 00:31:37.279 "num_blocks": 65536, 00:31:37.279 "uuid": "97e4190e-0f25-474e-9c04-444ad2c9d239", 00:31:37.279 "assigned_rate_limits": { 00:31:37.279 "rw_ios_per_sec": 0, 00:31:37.279 "rw_mbytes_per_sec": 0, 00:31:37.279 "r_mbytes_per_sec": 0, 00:31:37.279 "w_mbytes_per_sec": 0 00:31:37.279 }, 00:31:37.279 "claimed": true, 00:31:37.279 "claim_type": "exclusive_write", 00:31:37.279 "zoned": false, 00:31:37.279 
"supported_io_types": { 00:31:37.279 "read": true, 00:31:37.279 "write": true, 00:31:37.279 "unmap": true, 00:31:37.279 "write_zeroes": true, 00:31:37.279 "flush": true, 00:31:37.279 "reset": true, 00:31:37.279 "compare": false, 00:31:37.279 "compare_and_write": false, 00:31:37.279 "abort": true, 00:31:37.280 "nvme_admin": false, 00:31:37.280 "nvme_io": false 00:31:37.280 }, 00:31:37.280 "memory_domains": [ 00:31:37.280 { 00:31:37.280 "dma_device_id": "system", 00:31:37.280 "dma_device_type": 1 00:31:37.280 }, 00:31:37.280 { 00:31:37.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:37.280 "dma_device_type": 2 00:31:37.280 } 00:31:37.280 ], 00:31:37.280 "driver_specific": {} 00:31:37.280 }' 00:31:37.280 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:37.605 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:37.605 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:37.605 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:37.605 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:37.605 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:37.605 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:37.605 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:37.869 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:37.869 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:37.869 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:37.869 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:37.869 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:38.127 [2024-05-15 11:24:56.567797] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:38.127 [2024-05-15 11:24:56.567851] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:38.127 [2024-05-15 11:24:56.567912] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:38.127 11:24:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:38.128 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:31:38.128 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:38.128 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:38.128 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:38.128 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:38.128 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.128 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:38.386 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:38.386 "name": "Existed_Raid", 00:31:38.386 "uuid": "54564e65-90d9-4aad-8e8b-980b4e48b39d", 00:31:38.386 "strip_size_kb": 64, 00:31:38.386 "state": "offline", 00:31:38.386 "raid_level": "concat", 00:31:38.386 "superblock": false, 00:31:38.386 "num_base_bdevs": 4, 00:31:38.386 "num_base_bdevs_discovered": 3, 00:31:38.386 "num_base_bdevs_operational": 3, 00:31:38.386 "base_bdevs_list": [ 00:31:38.386 { 00:31:38.386 "name": null, 00:31:38.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:38.386 "is_configured": false, 00:31:38.386 "data_offset": 0, 00:31:38.386 "data_size": 65536 00:31:38.386 }, 00:31:38.386 { 00:31:38.386 "name": "BaseBdev2", 00:31:38.386 "uuid": "90cfd86e-b3de-41fd-885e-5be5c3c8b112", 00:31:38.386 "is_configured": true, 00:31:38.386 "data_offset": 0, 00:31:38.386 "data_size": 65536 00:31:38.386 }, 00:31:38.386 { 00:31:38.386 "name": "BaseBdev3", 00:31:38.386 "uuid": "6062931f-1d5d-4b03-beff-232c848eabdb", 00:31:38.386 "is_configured": true, 00:31:38.386 "data_offset": 0, 00:31:38.386 "data_size": 65536 00:31:38.386 }, 00:31:38.386 { 00:31:38.386 "name": "BaseBdev4", 00:31:38.386 "uuid": "97e4190e-0f25-474e-9c04-444ad2c9d239", 00:31:38.386 "is_configured": true, 00:31:38.386 "data_offset": 0, 00:31:38.386 "data_size": 65536 00:31:38.386 } 00:31:38.386 ] 00:31:38.386 }' 00:31:38.386 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:38.386 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:38.953 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:38.953 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:38.953 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.953 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:31:39.212 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:31:39.212 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:39.212 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:39.471 [2024-05-15 11:24:57.872650] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:31:39.471 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:39.471 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:39.471 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.471 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:31:39.729 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:31:39.729 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:39.729 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:31:39.988 [2024-05-15 11:24:58.424991] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:39.988 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:39.988 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:39.988 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:31:39.988 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.246 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:31:40.246 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:40.246 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:31:40.504 [2024-05-15 11:24:58.945067] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:31:40.504 [2024-05-15 11:24:58.945135] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:31:40.504 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:40.504 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:40.504 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:31:40.504 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.763 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:31:40.763 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:31:40.763 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:31:40.763 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:31:40.763 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:31:40.763 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:41.021 BaseBdev2 00:31:41.021 11:24:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:31:41.021 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:31:41.021 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:41.021 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:31:41.021 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:41.021 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:41.021 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:41.331 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:41.331 [ 00:31:41.331 { 00:31:41.331 "name": "BaseBdev2", 00:31:41.331 "aliases": [ 00:31:41.331 "f3d97060-0911-4dc5-849e-796e3f1e3441" 00:31:41.331 ], 00:31:41.331 "product_name": "Malloc disk", 00:31:41.331 "block_size": 512, 00:31:41.331 "num_blocks": 65536, 00:31:41.331 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:41.332 "assigned_rate_limits": { 00:31:41.332 "rw_ios_per_sec": 0, 00:31:41.332 "rw_mbytes_per_sec": 0, 00:31:41.332 "r_mbytes_per_sec": 0, 00:31:41.332 "w_mbytes_per_sec": 0 00:31:41.332 }, 00:31:41.332 "claimed": false, 00:31:41.332 "zoned": false, 00:31:41.332 "supported_io_types": { 00:31:41.332 "read": true, 00:31:41.332 "write": true, 00:31:41.332 "unmap": true, 00:31:41.332 "write_zeroes": true, 00:31:41.332 "flush": true, 00:31:41.332 "reset": true, 00:31:41.332 "compare": false, 00:31:41.332 "compare_and_write": false, 00:31:41.332 "abort": true, 00:31:41.332 "nvme_admin": false, 00:31:41.332 "nvme_io": false 00:31:41.332 }, 00:31:41.332 "memory_domains": [ 00:31:41.332 { 00:31:41.332 "dma_device_id": "system", 00:31:41.332 "dma_device_type": 1 00:31:41.332 }, 00:31:41.332 { 00:31:41.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:41.332 "dma_device_type": 2 00:31:41.332 } 00:31:41.332 ], 00:31:41.332 "driver_specific": {} 00:31:41.332 } 00:31:41.332 ] 00:31:41.332 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:31:41.332 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:31:41.332 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:31:41.332 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:41.590 BaseBdev3 00:31:41.590 11:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:31:41.590 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:31:41.590 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:41.590 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:31:41.590 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:41.590 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:41.590 11:25:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:41.849 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:42.107 [ 00:31:42.107 { 00:31:42.107 "name": "BaseBdev3", 00:31:42.107 "aliases": [ 00:31:42.107 "fa61a315-ca49-4102-a6c1-677ae920208a" 00:31:42.107 ], 00:31:42.107 "product_name": "Malloc disk", 00:31:42.107 "block_size": 512, 00:31:42.107 "num_blocks": 65536, 00:31:42.107 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:42.107 "assigned_rate_limits": { 00:31:42.107 "rw_ios_per_sec": 0, 00:31:42.107 "rw_mbytes_per_sec": 0, 00:31:42.107 "r_mbytes_per_sec": 0, 00:31:42.107 "w_mbytes_per_sec": 0 00:31:42.107 }, 00:31:42.107 "claimed": false, 00:31:42.107 "zoned": false, 00:31:42.107 "supported_io_types": { 00:31:42.107 "read": true, 00:31:42.107 "write": true, 00:31:42.107 "unmap": true, 00:31:42.107 "write_zeroes": true, 00:31:42.107 "flush": true, 00:31:42.107 "reset": true, 00:31:42.107 "compare": false, 00:31:42.107 "compare_and_write": false, 00:31:42.107 "abort": true, 00:31:42.107 "nvme_admin": false, 00:31:42.107 "nvme_io": false 00:31:42.107 }, 00:31:42.107 "memory_domains": [ 00:31:42.107 { 00:31:42.107 "dma_device_id": "system", 00:31:42.107 "dma_device_type": 1 00:31:42.107 }, 00:31:42.107 { 00:31:42.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:42.107 "dma_device_type": 2 00:31:42.107 } 00:31:42.107 ], 00:31:42.107 "driver_specific": {} 00:31:42.107 } 00:31:42.107 ] 00:31:42.107 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:31:42.107 11:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:31:42.107 11:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:31:42.107 11:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:31:42.107 BaseBdev4 00:31:42.366 11:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:31:42.366 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:31:42.366 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:42.366 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:31:42.366 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:42.366 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:42.366 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:42.366 11:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:42.624 [ 00:31:42.624 { 00:31:42.624 "name": "BaseBdev4", 00:31:42.624 "aliases": [ 00:31:42.624 "b1ef8851-6b42-4917-87cc-c971e83f115c" 00:31:42.624 ], 00:31:42.624 "product_name": "Malloc disk", 00:31:42.624 "block_size": 512, 00:31:42.624 "num_blocks": 65536, 00:31:42.624 
"uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:42.624 "assigned_rate_limits": { 00:31:42.624 "rw_ios_per_sec": 0, 00:31:42.624 "rw_mbytes_per_sec": 0, 00:31:42.624 "r_mbytes_per_sec": 0, 00:31:42.624 "w_mbytes_per_sec": 0 00:31:42.624 }, 00:31:42.624 "claimed": false, 00:31:42.624 "zoned": false, 00:31:42.624 "supported_io_types": { 00:31:42.624 "read": true, 00:31:42.624 "write": true, 00:31:42.624 "unmap": true, 00:31:42.624 "write_zeroes": true, 00:31:42.624 "flush": true, 00:31:42.624 "reset": true, 00:31:42.624 "compare": false, 00:31:42.624 "compare_and_write": false, 00:31:42.624 "abort": true, 00:31:42.624 "nvme_admin": false, 00:31:42.624 "nvme_io": false 00:31:42.624 }, 00:31:42.624 "memory_domains": [ 00:31:42.624 { 00:31:42.624 "dma_device_id": "system", 00:31:42.624 "dma_device_type": 1 00:31:42.624 }, 00:31:42.624 { 00:31:42.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:42.624 "dma_device_type": 2 00:31:42.624 } 00:31:42.624 ], 00:31:42.624 "driver_specific": {} 00:31:42.624 } 00:31:42.624 ] 00:31:42.624 11:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:31:42.624 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:31:42.624 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:31:42.624 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:31:42.883 [2024-05-15 11:25:01.350923] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:42.883 [2024-05-15 11:25:01.350997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:42.883 [2024-05-15 11:25:01.351030] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:42.883 [2024-05-15 11:25:01.352684] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:42.883 [2024-05-15 11:25:01.352728] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:31:42.883 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:43.141 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:43.141 "name": "Existed_Raid", 00:31:43.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.141 "strip_size_kb": 64, 00:31:43.141 "state": "configuring", 00:31:43.141 "raid_level": "concat", 00:31:43.141 "superblock": false, 00:31:43.141 "num_base_bdevs": 4, 00:31:43.141 "num_base_bdevs_discovered": 3, 00:31:43.141 "num_base_bdevs_operational": 4, 00:31:43.141 "base_bdevs_list": [ 00:31:43.141 { 00:31:43.141 "name": "BaseBdev1", 00:31:43.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.141 "is_configured": false, 00:31:43.141 "data_offset": 0, 00:31:43.141 "data_size": 0 00:31:43.141 }, 00:31:43.141 { 00:31:43.141 "name": "BaseBdev2", 00:31:43.141 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:43.141 "is_configured": true, 00:31:43.141 "data_offset": 0, 00:31:43.141 "data_size": 65536 00:31:43.141 }, 00:31:43.141 { 00:31:43.141 "name": "BaseBdev3", 00:31:43.141 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:43.141 "is_configured": true, 00:31:43.141 "data_offset": 0, 00:31:43.141 "data_size": 65536 00:31:43.141 }, 00:31:43.141 { 00:31:43.141 "name": "BaseBdev4", 00:31:43.141 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:43.141 "is_configured": true, 00:31:43.141 "data_offset": 0, 00:31:43.141 "data_size": 65536 00:31:43.141 } 00:31:43.141 ] 00:31:43.141 }' 00:31:43.141 11:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:43.141 11:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.709 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:43.967 [2024-05-15 11:25:02.483093] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:43.967 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:43.967 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:43.967 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:43.967 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:43.967 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:43.967 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:43.968 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:43.968 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:43.968 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:43.968 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:43.968 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.968 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:31:44.226 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:44.226 "name": "Existed_Raid", 00:31:44.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.226 "strip_size_kb": 64, 00:31:44.226 "state": "configuring", 00:31:44.226 "raid_level": "concat", 00:31:44.226 "superblock": false, 00:31:44.226 "num_base_bdevs": 4, 00:31:44.226 "num_base_bdevs_discovered": 2, 00:31:44.226 "num_base_bdevs_operational": 4, 00:31:44.226 "base_bdevs_list": [ 00:31:44.226 { 00:31:44.226 "name": "BaseBdev1", 00:31:44.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.226 "is_configured": false, 00:31:44.226 "data_offset": 0, 00:31:44.226 "data_size": 0 00:31:44.226 }, 00:31:44.226 { 00:31:44.226 "name": null, 00:31:44.226 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:44.226 "is_configured": false, 00:31:44.226 "data_offset": 0, 00:31:44.226 "data_size": 65536 00:31:44.226 }, 00:31:44.226 { 00:31:44.226 "name": "BaseBdev3", 00:31:44.226 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:44.226 "is_configured": true, 00:31:44.226 "data_offset": 0, 00:31:44.226 "data_size": 65536 00:31:44.226 }, 00:31:44.226 { 00:31:44.226 "name": "BaseBdev4", 00:31:44.226 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:44.226 "is_configured": true, 00:31:44.226 "data_offset": 0, 00:31:44.226 "data_size": 65536 00:31:44.226 } 00:31:44.226 ] 00:31:44.226 }' 00:31:44.226 11:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:44.226 11:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.794 11:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.794 11:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:45.053 11:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:31:45.053 11:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:45.312 BaseBdev1 00:31:45.312 [2024-05-15 11:25:03.897193] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:45.312 11:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:31:45.312 11:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:31:45.312 11:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:45.312 11:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:31:45.312 11:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:45.312 11:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:45.312 11:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:45.577 11:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:45.836 [ 00:31:45.836 { 00:31:45.836 "name": "BaseBdev1", 00:31:45.836 
"aliases": [ 00:31:45.836 "d09a6246-b6d5-4178-b322-2febe04077fa" 00:31:45.836 ], 00:31:45.836 "product_name": "Malloc disk", 00:31:45.836 "block_size": 512, 00:31:45.836 "num_blocks": 65536, 00:31:45.836 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:45.836 "assigned_rate_limits": { 00:31:45.836 "rw_ios_per_sec": 0, 00:31:45.836 "rw_mbytes_per_sec": 0, 00:31:45.836 "r_mbytes_per_sec": 0, 00:31:45.836 "w_mbytes_per_sec": 0 00:31:45.836 }, 00:31:45.836 "claimed": true, 00:31:45.836 "claim_type": "exclusive_write", 00:31:45.836 "zoned": false, 00:31:45.836 "supported_io_types": { 00:31:45.836 "read": true, 00:31:45.836 "write": true, 00:31:45.836 "unmap": true, 00:31:45.836 "write_zeroes": true, 00:31:45.836 "flush": true, 00:31:45.836 "reset": true, 00:31:45.836 "compare": false, 00:31:45.836 "compare_and_write": false, 00:31:45.836 "abort": true, 00:31:45.836 "nvme_admin": false, 00:31:45.836 "nvme_io": false 00:31:45.836 }, 00:31:45.836 "memory_domains": [ 00:31:45.836 { 00:31:45.836 "dma_device_id": "system", 00:31:45.836 "dma_device_type": 1 00:31:45.836 }, 00:31:45.836 { 00:31:45.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.836 "dma_device_type": 2 00:31:45.836 } 00:31:45.836 ], 00:31:45.836 "driver_specific": {} 00:31:45.836 } 00:31:45.836 ] 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.836 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:46.096 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:46.096 "name": "Existed_Raid", 00:31:46.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.096 "strip_size_kb": 64, 00:31:46.096 "state": "configuring", 00:31:46.096 "raid_level": "concat", 00:31:46.096 "superblock": false, 00:31:46.096 "num_base_bdevs": 4, 00:31:46.096 "num_base_bdevs_discovered": 3, 00:31:46.096 "num_base_bdevs_operational": 4, 00:31:46.096 "base_bdevs_list": [ 00:31:46.096 { 00:31:46.096 "name": "BaseBdev1", 00:31:46.096 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:46.096 "is_configured": true, 00:31:46.096 "data_offset": 0, 
00:31:46.096 "data_size": 65536 00:31:46.096 }, 00:31:46.096 { 00:31:46.096 "name": null, 00:31:46.096 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:46.096 "is_configured": false, 00:31:46.096 "data_offset": 0, 00:31:46.096 "data_size": 65536 00:31:46.096 }, 00:31:46.096 { 00:31:46.096 "name": "BaseBdev3", 00:31:46.096 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:46.096 "is_configured": true, 00:31:46.096 "data_offset": 0, 00:31:46.096 "data_size": 65536 00:31:46.096 }, 00:31:46.096 { 00:31:46.096 "name": "BaseBdev4", 00:31:46.096 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:46.096 "is_configured": true, 00:31:46.096 "data_offset": 0, 00:31:46.096 "data_size": 65536 00:31:46.096 } 00:31:46.096 ] 00:31:46.096 }' 00:31:46.096 11:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:46.096 11:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.663 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.663 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:46.921 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:46.921 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:31:47.178 [2024-05-15 11:25:05.565593] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:47.178 "name": "Existed_Raid", 00:31:47.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.178 "strip_size_kb": 64, 00:31:47.178 "state": "configuring", 00:31:47.178 "raid_level": "concat", 00:31:47.178 "superblock": false, 00:31:47.178 "num_base_bdevs": 4, 00:31:47.178 
"num_base_bdevs_discovered": 2, 00:31:47.178 "num_base_bdevs_operational": 4, 00:31:47.178 "base_bdevs_list": [ 00:31:47.178 { 00:31:47.178 "name": "BaseBdev1", 00:31:47.178 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:47.178 "is_configured": true, 00:31:47.178 "data_offset": 0, 00:31:47.178 "data_size": 65536 00:31:47.178 }, 00:31:47.178 { 00:31:47.178 "name": null, 00:31:47.178 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:47.178 "is_configured": false, 00:31:47.178 "data_offset": 0, 00:31:47.178 "data_size": 65536 00:31:47.178 }, 00:31:47.178 { 00:31:47.178 "name": null, 00:31:47.178 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:47.178 "is_configured": false, 00:31:47.178 "data_offset": 0, 00:31:47.178 "data_size": 65536 00:31:47.178 }, 00:31:47.178 { 00:31:47.178 "name": "BaseBdev4", 00:31:47.178 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:47.178 "is_configured": true, 00:31:47.178 "data_offset": 0, 00:31:47.178 "data_size": 65536 00:31:47.178 } 00:31:47.178 ] 00:31:47.178 }' 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:47.178 11:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.016 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:48.016 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.274 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:31:48.274 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:48.531 [2024-05-15 11:25:06.925860] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.531 11:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:48.817 11:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:31:48.817 "name": "Existed_Raid", 00:31:48.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.817 "strip_size_kb": 64, 00:31:48.817 "state": "configuring", 00:31:48.817 "raid_level": "concat", 00:31:48.817 "superblock": false, 00:31:48.817 "num_base_bdevs": 4, 00:31:48.817 "num_base_bdevs_discovered": 3, 00:31:48.817 "num_base_bdevs_operational": 4, 00:31:48.817 "base_bdevs_list": [ 00:31:48.817 { 00:31:48.817 "name": "BaseBdev1", 00:31:48.817 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:48.817 "is_configured": true, 00:31:48.817 "data_offset": 0, 00:31:48.817 "data_size": 65536 00:31:48.817 }, 00:31:48.817 { 00:31:48.817 "name": null, 00:31:48.817 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:48.817 "is_configured": false, 00:31:48.817 "data_offset": 0, 00:31:48.817 "data_size": 65536 00:31:48.817 }, 00:31:48.817 { 00:31:48.817 "name": "BaseBdev3", 00:31:48.817 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:48.817 "is_configured": true, 00:31:48.817 "data_offset": 0, 00:31:48.817 "data_size": 65536 00:31:48.817 }, 00:31:48.817 { 00:31:48.817 "name": "BaseBdev4", 00:31:48.817 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:48.817 "is_configured": true, 00:31:48.817 "data_offset": 0, 00:31:48.817 "data_size": 65536 00:31:48.817 } 00:31:48.817 ] 00:31:48.817 }' 00:31:48.817 11:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:48.817 11:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.390 11:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.390 11:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:49.648 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:31:49.648 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:49.648 [2024-05-15 11:25:08.262028] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.907 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:50.166 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:50.166 "name": "Existed_Raid", 00:31:50.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.166 "strip_size_kb": 64, 00:31:50.166 "state": "configuring", 00:31:50.166 "raid_level": "concat", 00:31:50.166 "superblock": false, 00:31:50.166 "num_base_bdevs": 4, 00:31:50.166 "num_base_bdevs_discovered": 2, 00:31:50.166 "num_base_bdevs_operational": 4, 00:31:50.166 "base_bdevs_list": [ 00:31:50.166 { 00:31:50.166 "name": null, 00:31:50.166 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:50.166 "is_configured": false, 00:31:50.166 "data_offset": 0, 00:31:50.166 "data_size": 65536 00:31:50.166 }, 00:31:50.166 { 00:31:50.166 "name": null, 00:31:50.166 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:50.166 "is_configured": false, 00:31:50.166 "data_offset": 0, 00:31:50.166 "data_size": 65536 00:31:50.166 }, 00:31:50.166 { 00:31:50.166 "name": "BaseBdev3", 00:31:50.166 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:50.166 "is_configured": true, 00:31:50.166 "data_offset": 0, 00:31:50.166 "data_size": 65536 00:31:50.166 }, 00:31:50.166 { 00:31:50.166 "name": "BaseBdev4", 00:31:50.166 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:50.166 "is_configured": true, 00:31:50.166 "data_offset": 0, 00:31:50.166 "data_size": 65536 00:31:50.166 } 00:31:50.166 ] 00:31:50.166 }' 00:31:50.166 11:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:50.166 11:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.733 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:50.733 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:50.991 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:31:50.991 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:51.250 [2024-05-15 11:25:09.674536] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:51.250 11:25:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.250 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:51.508 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:51.508 "name": "Existed_Raid", 00:31:51.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.508 "strip_size_kb": 64, 00:31:51.508 "state": "configuring", 00:31:51.508 "raid_level": "concat", 00:31:51.508 "superblock": false, 00:31:51.508 "num_base_bdevs": 4, 00:31:51.508 "num_base_bdevs_discovered": 3, 00:31:51.508 "num_base_bdevs_operational": 4, 00:31:51.508 "base_bdevs_list": [ 00:31:51.508 { 00:31:51.508 "name": null, 00:31:51.508 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:51.508 "is_configured": false, 00:31:51.508 "data_offset": 0, 00:31:51.508 "data_size": 65536 00:31:51.508 }, 00:31:51.508 { 00:31:51.508 "name": "BaseBdev2", 00:31:51.508 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:51.508 "is_configured": true, 00:31:51.508 "data_offset": 0, 00:31:51.508 "data_size": 65536 00:31:51.508 }, 00:31:51.508 { 00:31:51.508 "name": "BaseBdev3", 00:31:51.508 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:51.508 "is_configured": true, 00:31:51.508 "data_offset": 0, 00:31:51.508 "data_size": 65536 00:31:51.508 }, 00:31:51.508 { 00:31:51.508 "name": "BaseBdev4", 00:31:51.508 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:51.508 "is_configured": true, 00:31:51.508 "data_offset": 0, 00:31:51.508 "data_size": 65536 00:31:51.508 } 00:31:51.508 ] 00:31:51.508 }' 00:31:51.508 11:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:51.508 11:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.083 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.083 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:52.397 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:31:52.397 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.397 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:52.397 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d09a6246-b6d5-4178-b322-2febe04077fa 00:31:52.655 [2024-05-15 11:25:11.229896] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:52.655 [2024-05-15 11:25:11.229944] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:31:52.655 [2024-05-15 11:25:11.229954] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:31:52.655 [2024-05-15 11:25:11.230075] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:52.655 [2024-05-15 11:25:11.230290] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:31:52.655 [2024-05-15 11:25:11.230305] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:31:52.655 [2024-05-15 11:25:11.230485] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:52.655 NewBaseBdev 00:31:52.655 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:31:52.656 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:31:52.656 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:31:52.656 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:31:52.656 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:31:52.656 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:31:52.656 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:52.914 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:53.173 [ 00:31:53.173 { 00:31:53.173 "name": "NewBaseBdev", 00:31:53.173 "aliases": [ 00:31:53.173 "d09a6246-b6d5-4178-b322-2febe04077fa" 00:31:53.173 ], 00:31:53.173 "product_name": "Malloc disk", 00:31:53.173 "block_size": 512, 00:31:53.173 "num_blocks": 65536, 00:31:53.173 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:53.173 "assigned_rate_limits": { 00:31:53.173 "rw_ios_per_sec": 0, 00:31:53.173 "rw_mbytes_per_sec": 0, 00:31:53.173 "r_mbytes_per_sec": 0, 00:31:53.173 "w_mbytes_per_sec": 0 00:31:53.173 }, 00:31:53.173 "claimed": true, 00:31:53.173 "claim_type": "exclusive_write", 00:31:53.173 "zoned": false, 00:31:53.173 "supported_io_types": { 00:31:53.173 "read": true, 00:31:53.173 "write": true, 00:31:53.173 "unmap": true, 00:31:53.173 "write_zeroes": true, 00:31:53.173 "flush": true, 00:31:53.173 "reset": true, 00:31:53.173 "compare": false, 00:31:53.173 "compare_and_write": false, 00:31:53.173 "abort": true, 00:31:53.173 "nvme_admin": false, 00:31:53.173 "nvme_io": false 00:31:53.173 }, 00:31:53.173 "memory_domains": [ 00:31:53.173 { 00:31:53.173 "dma_device_id": "system", 00:31:53.173 "dma_device_type": 1 00:31:53.173 }, 00:31:53.173 { 00:31:53.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:53.173 "dma_device_type": 2 00:31:53.173 } 00:31:53.173 ], 00:31:53.173 "driver_specific": {} 00:31:53.173 } 00:31:53.173 ] 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:53.173 11:25:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.173 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:53.431 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:53.431 "name": "Existed_Raid", 00:31:53.431 "uuid": "b5e86caa-7d6f-4c78-8b83-f730af698cb0", 00:31:53.431 "strip_size_kb": 64, 00:31:53.431 "state": "online", 00:31:53.431 "raid_level": "concat", 00:31:53.431 "superblock": false, 00:31:53.431 "num_base_bdevs": 4, 00:31:53.431 "num_base_bdevs_discovered": 4, 00:31:53.431 "num_base_bdevs_operational": 4, 00:31:53.431 "base_bdevs_list": [ 00:31:53.431 { 00:31:53.431 "name": "NewBaseBdev", 00:31:53.431 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:53.431 "is_configured": true, 00:31:53.431 "data_offset": 0, 00:31:53.431 "data_size": 65536 00:31:53.431 }, 00:31:53.431 { 00:31:53.431 "name": "BaseBdev2", 00:31:53.431 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:53.431 "is_configured": true, 00:31:53.431 "data_offset": 0, 00:31:53.431 "data_size": 65536 00:31:53.431 }, 00:31:53.431 { 00:31:53.431 "name": "BaseBdev3", 00:31:53.431 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:53.431 "is_configured": true, 00:31:53.431 "data_offset": 0, 00:31:53.431 "data_size": 65536 00:31:53.431 }, 00:31:53.431 { 00:31:53.431 "name": "BaseBdev4", 00:31:53.431 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:53.431 "is_configured": true, 00:31:53.431 "data_offset": 0, 00:31:53.431 "data_size": 65536 00:31:53.431 } 00:31:53.431 ] 00:31:53.431 }' 00:31:53.431 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:53.432 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 
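The verify_raid_bdev_properties step being traced here reduces to a simple pattern: fetch the RAID volume's descriptor over the RPC socket, collect the base bdevs marked is_configured, and assert that each one reports the same block_size, md_size, md_interleave and dif_type as the volume. The snippet below is an approximate bash reconstruction of that pattern from this trace, not the literal bdev_raid.sh source; the rpc.py path and socket name are the ones appearing in the log.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Descriptor of the RAID volume under test (jq '.[]' unwraps the single-element array).
raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
# Only members marked is_configured take part in the comparison.
names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")
for name in $names; do
    base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # Every configured base bdev must report the same geometry as the volume.
    [[ $(jq .block_size    <<< "$raid_info") == $(jq .block_size    <<< "$base_info") ]]
    [[ $(jq .md_size       <<< "$raid_info") == $(jq .md_size       <<< "$base_info") ]]
    [[ $(jq .md_interleave <<< "$raid_info") == $(jq .md_interleave <<< "$base_info") ]]
    [[ $(jq .dif_type      <<< "$raid_info") == $(jq .dif_type      <<< "$base_info") ]]
done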
00:31:54.366 [2024-05-15 11:25:12.838452] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:31:54.366 "name": "Existed_Raid", 00:31:54.366 "aliases": [ 00:31:54.366 "b5e86caa-7d6f-4c78-8b83-f730af698cb0" 00:31:54.366 ], 00:31:54.366 "product_name": "Raid Volume", 00:31:54.366 "block_size": 512, 00:31:54.366 "num_blocks": 262144, 00:31:54.366 "uuid": "b5e86caa-7d6f-4c78-8b83-f730af698cb0", 00:31:54.366 "assigned_rate_limits": { 00:31:54.366 "rw_ios_per_sec": 0, 00:31:54.366 "rw_mbytes_per_sec": 0, 00:31:54.366 "r_mbytes_per_sec": 0, 00:31:54.366 "w_mbytes_per_sec": 0 00:31:54.366 }, 00:31:54.366 "claimed": false, 00:31:54.366 "zoned": false, 00:31:54.366 "supported_io_types": { 00:31:54.366 "read": true, 00:31:54.366 "write": true, 00:31:54.366 "unmap": true, 00:31:54.366 "write_zeroes": true, 00:31:54.366 "flush": true, 00:31:54.366 "reset": true, 00:31:54.366 "compare": false, 00:31:54.366 "compare_and_write": false, 00:31:54.366 "abort": false, 00:31:54.366 "nvme_admin": false, 00:31:54.366 "nvme_io": false 00:31:54.366 }, 00:31:54.366 "memory_domains": [ 00:31:54.366 { 00:31:54.366 "dma_device_id": "system", 00:31:54.366 "dma_device_type": 1 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.366 "dma_device_type": 2 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "dma_device_id": "system", 00:31:54.366 "dma_device_type": 1 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.366 "dma_device_type": 2 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "dma_device_id": "system", 00:31:54.366 "dma_device_type": 1 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.366 "dma_device_type": 2 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "dma_device_id": "system", 00:31:54.366 "dma_device_type": 1 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.366 "dma_device_type": 2 00:31:54.366 } 00:31:54.366 ], 00:31:54.366 "driver_specific": { 00:31:54.366 "raid": { 00:31:54.366 "uuid": "b5e86caa-7d6f-4c78-8b83-f730af698cb0", 00:31:54.366 "strip_size_kb": 64, 00:31:54.366 "state": "online", 00:31:54.366 "raid_level": "concat", 00:31:54.366 "superblock": false, 00:31:54.366 "num_base_bdevs": 4, 00:31:54.366 "num_base_bdevs_discovered": 4, 00:31:54.366 "num_base_bdevs_operational": 4, 00:31:54.366 "base_bdevs_list": [ 00:31:54.366 { 00:31:54.366 "name": "NewBaseBdev", 00:31:54.366 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:54.366 "is_configured": true, 00:31:54.366 "data_offset": 0, 00:31:54.366 "data_size": 65536 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "name": "BaseBdev2", 00:31:54.366 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:54.366 "is_configured": true, 00:31:54.366 "data_offset": 0, 00:31:54.366 "data_size": 65536 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "name": "BaseBdev3", 00:31:54.366 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:54.366 "is_configured": true, 00:31:54.366 "data_offset": 0, 00:31:54.366 "data_size": 65536 00:31:54.366 }, 00:31:54.366 { 00:31:54.366 "name": "BaseBdev4", 00:31:54.366 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:54.366 "is_configured": true, 00:31:54.366 "data_offset": 0, 00:31:54.366 "data_size": 65536 00:31:54.366 } 00:31:54.366 ] 00:31:54.366 } 00:31:54.366 } 00:31:54.366 }' 00:31:54.366 11:25:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:31:54.366 BaseBdev2 00:31:54.366 BaseBdev3 00:31:54.366 BaseBdev4' 00:31:54.366 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:54.367 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:31:54.367 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:54.625 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:54.625 "name": "NewBaseBdev", 00:31:54.625 "aliases": [ 00:31:54.625 "d09a6246-b6d5-4178-b322-2febe04077fa" 00:31:54.625 ], 00:31:54.625 "product_name": "Malloc disk", 00:31:54.625 "block_size": 512, 00:31:54.625 "num_blocks": 65536, 00:31:54.625 "uuid": "d09a6246-b6d5-4178-b322-2febe04077fa", 00:31:54.625 "assigned_rate_limits": { 00:31:54.625 "rw_ios_per_sec": 0, 00:31:54.625 "rw_mbytes_per_sec": 0, 00:31:54.625 "r_mbytes_per_sec": 0, 00:31:54.625 "w_mbytes_per_sec": 0 00:31:54.625 }, 00:31:54.625 "claimed": true, 00:31:54.625 "claim_type": "exclusive_write", 00:31:54.625 "zoned": false, 00:31:54.625 "supported_io_types": { 00:31:54.625 "read": true, 00:31:54.625 "write": true, 00:31:54.625 "unmap": true, 00:31:54.625 "write_zeroes": true, 00:31:54.625 "flush": true, 00:31:54.625 "reset": true, 00:31:54.625 "compare": false, 00:31:54.625 "compare_and_write": false, 00:31:54.625 "abort": true, 00:31:54.625 "nvme_admin": false, 00:31:54.625 "nvme_io": false 00:31:54.625 }, 00:31:54.625 "memory_domains": [ 00:31:54.625 { 00:31:54.625 "dma_device_id": "system", 00:31:54.625 "dma_device_type": 1 00:31:54.625 }, 00:31:54.625 { 00:31:54.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.625 "dma_device_type": 2 00:31:54.625 } 00:31:54.625 ], 00:31:54.625 "driver_specific": {} 00:31:54.625 }' 00:31:54.625 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:54.625 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:54.625 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:54.625 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:54.882 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:54.882 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:54.882 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:54.883 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:54.883 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:54.883 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:54.883 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:55.140 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:55.140 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:55.140 11:25:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:55.140 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:55.399 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:55.399 "name": "BaseBdev2", 00:31:55.399 "aliases": [ 00:31:55.399 "f3d97060-0911-4dc5-849e-796e3f1e3441" 00:31:55.399 ], 00:31:55.399 "product_name": "Malloc disk", 00:31:55.399 "block_size": 512, 00:31:55.399 "num_blocks": 65536, 00:31:55.399 "uuid": "f3d97060-0911-4dc5-849e-796e3f1e3441", 00:31:55.399 "assigned_rate_limits": { 00:31:55.399 "rw_ios_per_sec": 0, 00:31:55.399 "rw_mbytes_per_sec": 0, 00:31:55.399 "r_mbytes_per_sec": 0, 00:31:55.399 "w_mbytes_per_sec": 0 00:31:55.399 }, 00:31:55.399 "claimed": true, 00:31:55.399 "claim_type": "exclusive_write", 00:31:55.399 "zoned": false, 00:31:55.399 "supported_io_types": { 00:31:55.399 "read": true, 00:31:55.399 "write": true, 00:31:55.399 "unmap": true, 00:31:55.399 "write_zeroes": true, 00:31:55.399 "flush": true, 00:31:55.399 "reset": true, 00:31:55.399 "compare": false, 00:31:55.399 "compare_and_write": false, 00:31:55.399 "abort": true, 00:31:55.399 "nvme_admin": false, 00:31:55.399 "nvme_io": false 00:31:55.399 }, 00:31:55.399 "memory_domains": [ 00:31:55.399 { 00:31:55.399 "dma_device_id": "system", 00:31:55.399 "dma_device_type": 1 00:31:55.399 }, 00:31:55.399 { 00:31:55.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:55.399 "dma_device_type": 2 00:31:55.399 } 00:31:55.399 ], 00:31:55.399 "driver_specific": {} 00:31:55.399 }' 00:31:55.399 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:55.399 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:55.399 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:55.399 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:55.399 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:55.399 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:55.399 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:55.657 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:55.657 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:55.657 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:55.657 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:55.657 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:55.657 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:55.657 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:55.657 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:55.916 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:55.916 "name": "BaseBdev3", 00:31:55.916 "aliases": [ 00:31:55.916 "fa61a315-ca49-4102-a6c1-677ae920208a" 00:31:55.916 ], 
00:31:55.916 "product_name": "Malloc disk", 00:31:55.916 "block_size": 512, 00:31:55.916 "num_blocks": 65536, 00:31:55.916 "uuid": "fa61a315-ca49-4102-a6c1-677ae920208a", 00:31:55.916 "assigned_rate_limits": { 00:31:55.916 "rw_ios_per_sec": 0, 00:31:55.916 "rw_mbytes_per_sec": 0, 00:31:55.916 "r_mbytes_per_sec": 0, 00:31:55.916 "w_mbytes_per_sec": 0 00:31:55.916 }, 00:31:55.916 "claimed": true, 00:31:55.916 "claim_type": "exclusive_write", 00:31:55.916 "zoned": false, 00:31:55.916 "supported_io_types": { 00:31:55.916 "read": true, 00:31:55.916 "write": true, 00:31:55.916 "unmap": true, 00:31:55.916 "write_zeroes": true, 00:31:55.916 "flush": true, 00:31:55.916 "reset": true, 00:31:55.916 "compare": false, 00:31:55.916 "compare_and_write": false, 00:31:55.916 "abort": true, 00:31:55.916 "nvme_admin": false, 00:31:55.916 "nvme_io": false 00:31:55.916 }, 00:31:55.916 "memory_domains": [ 00:31:55.916 { 00:31:55.916 "dma_device_id": "system", 00:31:55.916 "dma_device_type": 1 00:31:55.916 }, 00:31:55.916 { 00:31:55.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:55.916 "dma_device_type": 2 00:31:55.916 } 00:31:55.916 ], 00:31:55.916 "driver_specific": {} 00:31:55.916 }' 00:31:55.916 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:55.916 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:55.916 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:55.916 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:56.184 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:56.184 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:56.184 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:56.184 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:56.184 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:56.184 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:56.184 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:56.457 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:56.457 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:31:56.457 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:31:56.457 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:31:56.457 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:31:56.457 "name": "BaseBdev4", 00:31:56.457 "aliases": [ 00:31:56.457 "b1ef8851-6b42-4917-87cc-c971e83f115c" 00:31:56.457 ], 00:31:56.457 "product_name": "Malloc disk", 00:31:56.457 "block_size": 512, 00:31:56.457 "num_blocks": 65536, 00:31:56.457 "uuid": "b1ef8851-6b42-4917-87cc-c971e83f115c", 00:31:56.457 "assigned_rate_limits": { 00:31:56.457 "rw_ios_per_sec": 0, 00:31:56.457 "rw_mbytes_per_sec": 0, 00:31:56.457 "r_mbytes_per_sec": 0, 00:31:56.457 "w_mbytes_per_sec": 0 00:31:56.457 }, 00:31:56.457 "claimed": true, 00:31:56.457 "claim_type": "exclusive_write", 00:31:56.457 "zoned": false, 00:31:56.457 
"supported_io_types": { 00:31:56.457 "read": true, 00:31:56.457 "write": true, 00:31:56.457 "unmap": true, 00:31:56.457 "write_zeroes": true, 00:31:56.457 "flush": true, 00:31:56.457 "reset": true, 00:31:56.457 "compare": false, 00:31:56.457 "compare_and_write": false, 00:31:56.457 "abort": true, 00:31:56.457 "nvme_admin": false, 00:31:56.457 "nvme_io": false 00:31:56.457 }, 00:31:56.457 "memory_domains": [ 00:31:56.457 { 00:31:56.457 "dma_device_id": "system", 00:31:56.457 "dma_device_type": 1 00:31:56.457 }, 00:31:56.457 { 00:31:56.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:56.457 "dma_device_type": 2 00:31:56.457 } 00:31:56.457 ], 00:31:56.457 "driver_specific": {} 00:31:56.457 }' 00:31:56.457 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:56.457 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:31:56.716 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:31:56.716 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:56.716 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:31:56.716 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:56.716 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:56.716 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:31:56.974 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:56.974 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:56.974 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:31:56.974 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:31:56.974 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:57.233 [2024-05-15 11:25:15.722794] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:57.233 [2024-05-15 11:25:15.722863] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:57.233 [2024-05-15 11:25:15.722932] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:57.233 [2024-05-15 11:25:15.722979] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:57.233 [2024-05-15 11:25:15.722990] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 67279 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 67279 ']' 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 67279 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67279 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:57.233 killing process with pid 67279 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67279' 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 67279 00:31:57.233 11:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 67279 00:31:57.233 [2024-05-15 11:25:15.762099] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:57.493 [2024-05-15 11:25:16.076595] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:58.868 ************************************ 00:31:58.868 END TEST raid_state_function_test 00:31:58.868 ************************************ 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:31:58.868 00:31:58.868 real 0m34.050s 00:31:58.868 user 1m4.033s 00:31:58.868 sys 0m3.574s 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.868 11:25:17 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:31:58.868 11:25:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:31:58.868 11:25:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:58.868 11:25:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:58.868 ************************************ 00:31:58.868 START TEST raid_state_function_test_sb 00:31:58.868 ************************************ 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:58.868 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:31:58.869 Process raid pid: 68384 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=68384 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 68384' 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 68384 /var/tmp/spdk-raid.sock 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 68384 ']' 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:58.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:58.869 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.869 [2024-05-15 11:25:17.468892] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:31:58.869 [2024-05-15 11:25:17.469094] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.127 [2024-05-15 11:25:17.623337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.385 [2024-05-15 11:25:17.841603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.644 [2024-05-15 11:25:18.029208] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:59.902 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:59.902 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:31:59.902 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:31:59.902 [2024-05-15 11:25:18.530470] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:59.902 [2024-05-15 11:25:18.530559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:59.902 [2024-05-15 11:25:18.530580] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:59.902 [2024-05-15 11:25:18.530601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:59.902 [2024-05-15 11:25:18.530610] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:59.902 [2024-05-15 11:25:18.530656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:59.902 [2024-05-15 11:25:18.530667] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:59.902 [2024-05-15 11:25:18.530690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:00.161 "name": "Existed_Raid", 00:32:00.161 "uuid": "69be3ba9-432b-4aca-bc4e-df8af85e3506", 00:32:00.161 "strip_size_kb": 64, 00:32:00.161 "state": "configuring", 00:32:00.161 "raid_level": "concat", 00:32:00.161 "superblock": true, 00:32:00.161 "num_base_bdevs": 4, 00:32:00.161 "num_base_bdevs_discovered": 0, 00:32:00.161 "num_base_bdevs_operational": 4, 00:32:00.161 "base_bdevs_list": [ 00:32:00.161 { 00:32:00.161 "name": "BaseBdev1", 00:32:00.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.161 "is_configured": false, 00:32:00.161 "data_offset": 0, 00:32:00.161 "data_size": 0 00:32:00.161 }, 00:32:00.161 { 00:32:00.161 "name": "BaseBdev2", 00:32:00.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.161 "is_configured": false, 00:32:00.161 "data_offset": 0, 00:32:00.161 "data_size": 0 00:32:00.161 }, 00:32:00.161 { 00:32:00.161 "name": "BaseBdev3", 00:32:00.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.161 "is_configured": false, 00:32:00.161 "data_offset": 0, 00:32:00.161 "data_size": 0 00:32:00.161 }, 00:32:00.161 { 00:32:00.161 "name": "BaseBdev4", 00:32:00.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.161 "is_configured": false, 00:32:00.161 "data_offset": 0, 00:32:00.161 "data_size": 0 00:32:00.161 } 00:32:00.161 ] 00:32:00.161 }' 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:00.161 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.097 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:01.097 [2024-05-15 11:25:19.670455] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:01.097 [2024-05-15 11:25:19.670499] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:32:01.097 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:01.355 [2024-05-15 11:25:19.870535] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:01.355 [2024-05-15 11:25:19.870618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:01.355 [2024-05-15 11:25:19.870650] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:01.355 [2024-05-15 11:25:19.870677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:01.355 [2024-05-15 11:25:19.870686] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:01.355 [2024-05-15 11:25:19.870704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:01.355 [2024-05-15 11:25:19.870712] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:01.355 [2024-05-15 11:25:19.870742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:01.355 11:25:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:01.614 BaseBdev1 00:32:01.614 [2024-05-15 11:25:20.154929] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:01.614 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:32:01.614 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:32:01.614 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:01.614 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:01.614 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:01.614 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:01.614 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:01.873 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:02.132 [ 00:32:02.132 { 00:32:02.132 "name": "BaseBdev1", 00:32:02.132 "aliases": [ 00:32:02.132 "1c43cefa-5dad-412e-85a1-d68618486178" 00:32:02.132 ], 00:32:02.132 "product_name": "Malloc disk", 00:32:02.132 "block_size": 512, 00:32:02.132 "num_blocks": 65536, 00:32:02.132 "uuid": "1c43cefa-5dad-412e-85a1-d68618486178", 00:32:02.132 "assigned_rate_limits": { 00:32:02.132 "rw_ios_per_sec": 0, 00:32:02.132 "rw_mbytes_per_sec": 0, 00:32:02.132 "r_mbytes_per_sec": 0, 00:32:02.132 "w_mbytes_per_sec": 0 00:32:02.132 }, 00:32:02.132 "claimed": true, 00:32:02.132 "claim_type": "exclusive_write", 00:32:02.132 "zoned": false, 00:32:02.132 "supported_io_types": { 00:32:02.132 "read": true, 00:32:02.132 "write": true, 00:32:02.132 "unmap": true, 00:32:02.132 "write_zeroes": true, 00:32:02.132 "flush": true, 00:32:02.132 "reset": true, 00:32:02.132 "compare": false, 00:32:02.132 "compare_and_write": false, 00:32:02.132 "abort": true, 00:32:02.132 "nvme_admin": false, 00:32:02.132 "nvme_io": false 00:32:02.132 }, 00:32:02.132 "memory_domains": [ 00:32:02.132 { 00:32:02.132 "dma_device_id": "system", 00:32:02.132 "dma_device_type": 1 00:32:02.132 }, 00:32:02.132 { 00:32:02.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.132 "dma_device_type": 2 00:32:02.132 } 00:32:02.132 ], 00:32:02.132 "driver_specific": {} 00:32:02.132 } 00:32:02.132 ] 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:02.132 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.391 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:02.391 "name": "Existed_Raid", 00:32:02.391 "uuid": "7f305f41-f064-4824-ab2f-f5766c31f382", 00:32:02.391 "strip_size_kb": 64, 00:32:02.391 "state": "configuring", 00:32:02.391 "raid_level": "concat", 00:32:02.391 "superblock": true, 00:32:02.391 "num_base_bdevs": 4, 00:32:02.391 "num_base_bdevs_discovered": 1, 00:32:02.391 "num_base_bdevs_operational": 4, 00:32:02.391 "base_bdevs_list": [ 00:32:02.391 { 00:32:02.391 "name": "BaseBdev1", 00:32:02.391 "uuid": "1c43cefa-5dad-412e-85a1-d68618486178", 00:32:02.391 "is_configured": true, 00:32:02.391 "data_offset": 2048, 00:32:02.391 "data_size": 63488 00:32:02.391 }, 00:32:02.391 { 00:32:02.391 "name": "BaseBdev2", 00:32:02.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.391 "is_configured": false, 00:32:02.391 "data_offset": 0, 00:32:02.391 "data_size": 0 00:32:02.391 }, 00:32:02.391 { 00:32:02.391 "name": "BaseBdev3", 00:32:02.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.391 "is_configured": false, 00:32:02.391 "data_offset": 0, 00:32:02.391 "data_size": 0 00:32:02.391 }, 00:32:02.391 { 00:32:02.391 "name": "BaseBdev4", 00:32:02.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.391 "is_configured": false, 00:32:02.391 "data_offset": 0, 00:32:02.391 "data_size": 0 00:32:02.391 } 00:32:02.391 ] 00:32:02.391 }' 00:32:02.391 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:02.391 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.957 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:03.215 [2024-05-15 11:25:21.603192] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:03.215 [2024-05-15 11:25:21.603240] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:03.215 [2024-05-15 11:25:21.831341] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:03.215 [2024-05-15 11:25:21.833016] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:03.215 [2024-05-15 11:25:21.833094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:32:03.215 [2024-05-15 11:25:21.833119] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:03.215 [2024-05-15 11:25:21.833149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:03.215 [2024-05-15 11:25:21.833159] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:03.215 [2024-05-15 11:25:21.833177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:03.215 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:03.216 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:03.216 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:03.216 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.782 11:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:03.782 "name": "Existed_Raid", 00:32:03.782 "uuid": "b94a3efa-2cec-4dd5-8d8c-b444a190d217", 00:32:03.782 "strip_size_kb": 64, 00:32:03.782 "state": "configuring", 00:32:03.782 "raid_level": "concat", 00:32:03.782 "superblock": true, 00:32:03.782 "num_base_bdevs": 4, 00:32:03.782 "num_base_bdevs_discovered": 1, 00:32:03.782 "num_base_bdevs_operational": 4, 00:32:03.782 "base_bdevs_list": [ 00:32:03.782 { 00:32:03.782 "name": "BaseBdev1", 00:32:03.782 "uuid": "1c43cefa-5dad-412e-85a1-d68618486178", 00:32:03.782 "is_configured": true, 00:32:03.782 "data_offset": 2048, 00:32:03.782 "data_size": 63488 00:32:03.782 }, 00:32:03.782 { 00:32:03.782 "name": "BaseBdev2", 00:32:03.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.782 "is_configured": false, 00:32:03.782 "data_offset": 0, 00:32:03.782 "data_size": 0 00:32:03.782 }, 00:32:03.782 { 00:32:03.782 "name": "BaseBdev3", 00:32:03.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.782 "is_configured": false, 00:32:03.782 "data_offset": 0, 00:32:03.782 "data_size": 0 00:32:03.782 }, 00:32:03.782 { 00:32:03.782 "name": "BaseBdev4", 00:32:03.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.782 
"is_configured": false, 00:32:03.782 "data_offset": 0, 00:32:03.782 "data_size": 0 00:32:03.782 } 00:32:03.782 ] 00:32:03.782 }' 00:32:03.782 11:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:03.782 11:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:04.349 11:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:04.607 BaseBdev2 00:32:04.607 [2024-05-15 11:25:23.018952] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:04.607 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:32:04.607 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:04.607 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:04.607 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:04.607 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:04.607 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:04.607 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:04.869 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:04.869 [ 00:32:04.869 { 00:32:04.869 "name": "BaseBdev2", 00:32:04.869 "aliases": [ 00:32:04.869 "947e2035-2e5f-4a07-87c7-cdb14f852ec4" 00:32:04.869 ], 00:32:04.869 "product_name": "Malloc disk", 00:32:04.869 "block_size": 512, 00:32:04.869 "num_blocks": 65536, 00:32:04.869 "uuid": "947e2035-2e5f-4a07-87c7-cdb14f852ec4", 00:32:04.869 "assigned_rate_limits": { 00:32:04.869 "rw_ios_per_sec": 0, 00:32:04.869 "rw_mbytes_per_sec": 0, 00:32:04.869 "r_mbytes_per_sec": 0, 00:32:04.869 "w_mbytes_per_sec": 0 00:32:04.869 }, 00:32:04.869 "claimed": true, 00:32:04.869 "claim_type": "exclusive_write", 00:32:04.869 "zoned": false, 00:32:04.869 "supported_io_types": { 00:32:04.869 "read": true, 00:32:04.869 "write": true, 00:32:04.869 "unmap": true, 00:32:04.869 "write_zeroes": true, 00:32:04.869 "flush": true, 00:32:04.869 "reset": true, 00:32:04.869 "compare": false, 00:32:04.869 "compare_and_write": false, 00:32:04.869 "abort": true, 00:32:04.869 "nvme_admin": false, 00:32:04.869 "nvme_io": false 00:32:04.869 }, 00:32:04.869 "memory_domains": [ 00:32:04.869 { 00:32:04.869 "dma_device_id": "system", 00:32:04.869 "dma_device_type": 1 00:32:04.869 }, 00:32:04.869 { 00:32:04.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:04.869 "dma_device_type": 2 00:32:04.869 } 00:32:04.870 ], 00:32:04.870 "driver_specific": {} 00:32:04.870 } 00:32:04.870 ] 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.870 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:05.127 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:05.127 "name": "Existed_Raid", 00:32:05.127 "uuid": "b94a3efa-2cec-4dd5-8d8c-b444a190d217", 00:32:05.127 "strip_size_kb": 64, 00:32:05.127 "state": "configuring", 00:32:05.127 "raid_level": "concat", 00:32:05.127 "superblock": true, 00:32:05.127 "num_base_bdevs": 4, 00:32:05.127 "num_base_bdevs_discovered": 2, 00:32:05.127 "num_base_bdevs_operational": 4, 00:32:05.127 "base_bdevs_list": [ 00:32:05.127 { 00:32:05.127 "name": "BaseBdev1", 00:32:05.127 "uuid": "1c43cefa-5dad-412e-85a1-d68618486178", 00:32:05.127 "is_configured": true, 00:32:05.127 "data_offset": 2048, 00:32:05.127 "data_size": 63488 00:32:05.127 }, 00:32:05.127 { 00:32:05.127 "name": "BaseBdev2", 00:32:05.127 "uuid": "947e2035-2e5f-4a07-87c7-cdb14f852ec4", 00:32:05.127 "is_configured": true, 00:32:05.128 "data_offset": 2048, 00:32:05.128 "data_size": 63488 00:32:05.128 }, 00:32:05.128 { 00:32:05.128 "name": "BaseBdev3", 00:32:05.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.128 "is_configured": false, 00:32:05.128 "data_offset": 0, 00:32:05.128 "data_size": 0 00:32:05.128 }, 00:32:05.128 { 00:32:05.128 "name": "BaseBdev4", 00:32:05.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.128 "is_configured": false, 00:32:05.128 "data_offset": 0, 00:32:05.128 "data_size": 0 00:32:05.128 } 00:32:05.128 ] 00:32:05.128 }' 00:32:05.128 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:05.128 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.062 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:06.062 [2024-05-15 11:25:24.661292] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:06.062 BaseBdev3 00:32:06.062 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:32:06.062 11:25:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:32:06.062 11:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:06.062 11:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:06.062 11:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:06.062 11:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:06.062 11:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:06.629 11:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:06.629 [ 00:32:06.629 { 00:32:06.629 "name": "BaseBdev3", 00:32:06.629 "aliases": [ 00:32:06.629 "a4213f83-335f-4f77-aa94-ce70195cb628" 00:32:06.629 ], 00:32:06.629 "product_name": "Malloc disk", 00:32:06.629 "block_size": 512, 00:32:06.629 "num_blocks": 65536, 00:32:06.629 "uuid": "a4213f83-335f-4f77-aa94-ce70195cb628", 00:32:06.629 "assigned_rate_limits": { 00:32:06.629 "rw_ios_per_sec": 0, 00:32:06.629 "rw_mbytes_per_sec": 0, 00:32:06.629 "r_mbytes_per_sec": 0, 00:32:06.629 "w_mbytes_per_sec": 0 00:32:06.629 }, 00:32:06.629 "claimed": true, 00:32:06.629 "claim_type": "exclusive_write", 00:32:06.629 "zoned": false, 00:32:06.629 "supported_io_types": { 00:32:06.629 "read": true, 00:32:06.629 "write": true, 00:32:06.629 "unmap": true, 00:32:06.629 "write_zeroes": true, 00:32:06.629 "flush": true, 00:32:06.629 "reset": true, 00:32:06.629 "compare": false, 00:32:06.629 "compare_and_write": false, 00:32:06.629 "abort": true, 00:32:06.629 "nvme_admin": false, 00:32:06.629 "nvme_io": false 00:32:06.629 }, 00:32:06.629 "memory_domains": [ 00:32:06.629 { 00:32:06.629 "dma_device_id": "system", 00:32:06.629 "dma_device_type": 1 00:32:06.630 }, 00:32:06.630 { 00:32:06.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.630 "dma_device_type": 2 00:32:06.630 } 00:32:06.630 ], 00:32:06.630 "driver_specific": {} 00:32:06.630 } 00:32:06.630 ] 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:06.630 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.888 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:06.888 "name": "Existed_Raid", 00:32:06.888 "uuid": "b94a3efa-2cec-4dd5-8d8c-b444a190d217", 00:32:06.888 "strip_size_kb": 64, 00:32:06.888 "state": "configuring", 00:32:06.888 "raid_level": "concat", 00:32:06.888 "superblock": true, 00:32:06.888 "num_base_bdevs": 4, 00:32:06.888 "num_base_bdevs_discovered": 3, 00:32:06.888 "num_base_bdevs_operational": 4, 00:32:06.888 "base_bdevs_list": [ 00:32:06.888 { 00:32:06.888 "name": "BaseBdev1", 00:32:06.888 "uuid": "1c43cefa-5dad-412e-85a1-d68618486178", 00:32:06.888 "is_configured": true, 00:32:06.888 "data_offset": 2048, 00:32:06.888 "data_size": 63488 00:32:06.888 }, 00:32:06.888 { 00:32:06.888 "name": "BaseBdev2", 00:32:06.888 "uuid": "947e2035-2e5f-4a07-87c7-cdb14f852ec4", 00:32:06.888 "is_configured": true, 00:32:06.888 "data_offset": 2048, 00:32:06.888 "data_size": 63488 00:32:06.888 }, 00:32:06.888 { 00:32:06.888 "name": "BaseBdev3", 00:32:06.888 "uuid": "a4213f83-335f-4f77-aa94-ce70195cb628", 00:32:06.888 "is_configured": true, 00:32:06.888 "data_offset": 2048, 00:32:06.888 "data_size": 63488 00:32:06.888 }, 00:32:06.888 { 00:32:06.888 "name": "BaseBdev4", 00:32:06.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.888 "is_configured": false, 00:32:06.888 "data_offset": 0, 00:32:06.888 "data_size": 0 00:32:06.888 } 00:32:06.888 ] 00:32:06.888 }' 00:32:06.888 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:06.888 11:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.823 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:07.823 [2024-05-15 11:25:26.350393] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:07.823 [2024-05-15 11:25:26.350595] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:32:07.823 [2024-05-15 11:25:26.350611] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:07.823 [2024-05-15 11:25:26.350722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:32:07.823 BaseBdev4 00:32:07.823 [2024-05-15 11:25:26.351221] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:32:07.823 [2024-05-15 11:25:26.351240] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:32:07.823 [2024-05-15 11:25:26.351357] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:07.823 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:32:07.823 11:25:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:32:07.823 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:07.823 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:07.823 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:07.823 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:07.823 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:08.082 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:08.341 [ 00:32:08.341 { 00:32:08.341 "name": "BaseBdev4", 00:32:08.341 "aliases": [ 00:32:08.341 "36d45e7f-9085-4010-83e5-fa355f959f23" 00:32:08.341 ], 00:32:08.341 "product_name": "Malloc disk", 00:32:08.341 "block_size": 512, 00:32:08.341 "num_blocks": 65536, 00:32:08.341 "uuid": "36d45e7f-9085-4010-83e5-fa355f959f23", 00:32:08.341 "assigned_rate_limits": { 00:32:08.341 "rw_ios_per_sec": 0, 00:32:08.341 "rw_mbytes_per_sec": 0, 00:32:08.341 "r_mbytes_per_sec": 0, 00:32:08.341 "w_mbytes_per_sec": 0 00:32:08.341 }, 00:32:08.341 "claimed": true, 00:32:08.341 "claim_type": "exclusive_write", 00:32:08.341 "zoned": false, 00:32:08.341 "supported_io_types": { 00:32:08.341 "read": true, 00:32:08.341 "write": true, 00:32:08.341 "unmap": true, 00:32:08.341 "write_zeroes": true, 00:32:08.341 "flush": true, 00:32:08.341 "reset": true, 00:32:08.341 "compare": false, 00:32:08.341 "compare_and_write": false, 00:32:08.341 "abort": true, 00:32:08.341 "nvme_admin": false, 00:32:08.341 "nvme_io": false 00:32:08.341 }, 00:32:08.341 "memory_domains": [ 00:32:08.341 { 00:32:08.341 "dma_device_id": "system", 00:32:08.341 "dma_device_type": 1 00:32:08.341 }, 00:32:08.341 { 00:32:08.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:08.341 "dma_device_type": 2 00:32:08.341 } 00:32:08.341 ], 00:32:08.341 "driver_specific": {} 00:32:08.341 } 00:32:08.341 ] 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:08.341 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.600 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:08.600 "name": "Existed_Raid", 00:32:08.600 "uuid": "b94a3efa-2cec-4dd5-8d8c-b444a190d217", 00:32:08.600 "strip_size_kb": 64, 00:32:08.600 "state": "online", 00:32:08.600 "raid_level": "concat", 00:32:08.600 "superblock": true, 00:32:08.600 "num_base_bdevs": 4, 00:32:08.600 "num_base_bdevs_discovered": 4, 00:32:08.600 "num_base_bdevs_operational": 4, 00:32:08.600 "base_bdevs_list": [ 00:32:08.600 { 00:32:08.600 "name": "BaseBdev1", 00:32:08.600 "uuid": "1c43cefa-5dad-412e-85a1-d68618486178", 00:32:08.600 "is_configured": true, 00:32:08.600 "data_offset": 2048, 00:32:08.600 "data_size": 63488 00:32:08.600 }, 00:32:08.600 { 00:32:08.600 "name": "BaseBdev2", 00:32:08.600 "uuid": "947e2035-2e5f-4a07-87c7-cdb14f852ec4", 00:32:08.600 "is_configured": true, 00:32:08.600 "data_offset": 2048, 00:32:08.600 "data_size": 63488 00:32:08.600 }, 00:32:08.600 { 00:32:08.600 "name": "BaseBdev3", 00:32:08.600 "uuid": "a4213f83-335f-4f77-aa94-ce70195cb628", 00:32:08.600 "is_configured": true, 00:32:08.600 "data_offset": 2048, 00:32:08.600 "data_size": 63488 00:32:08.600 }, 00:32:08.600 { 00:32:08.600 "name": "BaseBdev4", 00:32:08.600 "uuid": "36d45e7f-9085-4010-83e5-fa355f959f23", 00:32:08.600 "is_configured": true, 00:32:08.600 "data_offset": 2048, 00:32:08.600 "data_size": 63488 00:32:08.600 } 00:32:08.600 ] 00:32:08.600 }' 00:32:08.600 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:08.600 11:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.168 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:32:09.168 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:32:09.168 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:32:09.168 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:32:09.168 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:32:09.168 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:32:09.168 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:32:09.168 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:09.428 [2024-05-15 11:25:27.822879] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:09.428 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:32:09.428 "name": "Existed_Raid", 00:32:09.428 "aliases": [ 00:32:09.428 "b94a3efa-2cec-4dd5-8d8c-b444a190d217" 00:32:09.428 ], 00:32:09.428 
"product_name": "Raid Volume", 00:32:09.428 "block_size": 512, 00:32:09.428 "num_blocks": 253952, 00:32:09.428 "uuid": "b94a3efa-2cec-4dd5-8d8c-b444a190d217", 00:32:09.428 "assigned_rate_limits": { 00:32:09.428 "rw_ios_per_sec": 0, 00:32:09.428 "rw_mbytes_per_sec": 0, 00:32:09.428 "r_mbytes_per_sec": 0, 00:32:09.428 "w_mbytes_per_sec": 0 00:32:09.428 }, 00:32:09.428 "claimed": false, 00:32:09.428 "zoned": false, 00:32:09.428 "supported_io_types": { 00:32:09.428 "read": true, 00:32:09.428 "write": true, 00:32:09.428 "unmap": true, 00:32:09.428 "write_zeroes": true, 00:32:09.428 "flush": true, 00:32:09.428 "reset": true, 00:32:09.428 "compare": false, 00:32:09.428 "compare_and_write": false, 00:32:09.428 "abort": false, 00:32:09.428 "nvme_admin": false, 00:32:09.428 "nvme_io": false 00:32:09.428 }, 00:32:09.428 "memory_domains": [ 00:32:09.428 { 00:32:09.428 "dma_device_id": "system", 00:32:09.428 "dma_device_type": 1 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.428 "dma_device_type": 2 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "dma_device_id": "system", 00:32:09.428 "dma_device_type": 1 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.428 "dma_device_type": 2 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "dma_device_id": "system", 00:32:09.428 "dma_device_type": 1 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.428 "dma_device_type": 2 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "dma_device_id": "system", 00:32:09.428 "dma_device_type": 1 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.428 "dma_device_type": 2 00:32:09.428 } 00:32:09.428 ], 00:32:09.428 "driver_specific": { 00:32:09.428 "raid": { 00:32:09.428 "uuid": "b94a3efa-2cec-4dd5-8d8c-b444a190d217", 00:32:09.428 "strip_size_kb": 64, 00:32:09.428 "state": "online", 00:32:09.428 "raid_level": "concat", 00:32:09.428 "superblock": true, 00:32:09.428 "num_base_bdevs": 4, 00:32:09.428 "num_base_bdevs_discovered": 4, 00:32:09.428 "num_base_bdevs_operational": 4, 00:32:09.428 "base_bdevs_list": [ 00:32:09.428 { 00:32:09.428 "name": "BaseBdev1", 00:32:09.428 "uuid": "1c43cefa-5dad-412e-85a1-d68618486178", 00:32:09.428 "is_configured": true, 00:32:09.428 "data_offset": 2048, 00:32:09.428 "data_size": 63488 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "name": "BaseBdev2", 00:32:09.428 "uuid": "947e2035-2e5f-4a07-87c7-cdb14f852ec4", 00:32:09.428 "is_configured": true, 00:32:09.428 "data_offset": 2048, 00:32:09.428 "data_size": 63488 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "name": "BaseBdev3", 00:32:09.428 "uuid": "a4213f83-335f-4f77-aa94-ce70195cb628", 00:32:09.428 "is_configured": true, 00:32:09.428 "data_offset": 2048, 00:32:09.428 "data_size": 63488 00:32:09.428 }, 00:32:09.428 { 00:32:09.428 "name": "BaseBdev4", 00:32:09.428 "uuid": "36d45e7f-9085-4010-83e5-fa355f959f23", 00:32:09.428 "is_configured": true, 00:32:09.429 "data_offset": 2048, 00:32:09.429 "data_size": 63488 00:32:09.429 } 00:32:09.429 ] 00:32:09.429 } 00:32:09.429 } 00:32:09.429 }' 00:32:09.429 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:09.429 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:32:09.429 BaseBdev2 00:32:09.429 BaseBdev3 00:32:09.429 BaseBdev4' 00:32:09.429 11:25:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:09.429 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:09.429 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:09.687 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:09.687 "name": "BaseBdev1", 00:32:09.687 "aliases": [ 00:32:09.687 "1c43cefa-5dad-412e-85a1-d68618486178" 00:32:09.687 ], 00:32:09.687 "product_name": "Malloc disk", 00:32:09.687 "block_size": 512, 00:32:09.687 "num_blocks": 65536, 00:32:09.687 "uuid": "1c43cefa-5dad-412e-85a1-d68618486178", 00:32:09.687 "assigned_rate_limits": { 00:32:09.687 "rw_ios_per_sec": 0, 00:32:09.687 "rw_mbytes_per_sec": 0, 00:32:09.687 "r_mbytes_per_sec": 0, 00:32:09.687 "w_mbytes_per_sec": 0 00:32:09.687 }, 00:32:09.687 "claimed": true, 00:32:09.687 "claim_type": "exclusive_write", 00:32:09.687 "zoned": false, 00:32:09.687 "supported_io_types": { 00:32:09.687 "read": true, 00:32:09.687 "write": true, 00:32:09.687 "unmap": true, 00:32:09.687 "write_zeroes": true, 00:32:09.687 "flush": true, 00:32:09.687 "reset": true, 00:32:09.687 "compare": false, 00:32:09.687 "compare_and_write": false, 00:32:09.687 "abort": true, 00:32:09.687 "nvme_admin": false, 00:32:09.687 "nvme_io": false 00:32:09.687 }, 00:32:09.687 "memory_domains": [ 00:32:09.687 { 00:32:09.687 "dma_device_id": "system", 00:32:09.687 "dma_device_type": 1 00:32:09.687 }, 00:32:09.687 { 00:32:09.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.687 "dma_device_type": 2 00:32:09.687 } 00:32:09.687 ], 00:32:09.687 "driver_specific": {} 00:32:09.687 }' 00:32:09.687 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:09.687 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:09.687 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:09.687 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:09.946 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:09.946 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:09.946 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:09.946 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:09.946 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:09.946 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:09.946 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:10.204 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:10.204 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:10.205 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:10.205 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:10.205 11:25:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:10.205 "name": "BaseBdev2", 00:32:10.205 "aliases": [ 00:32:10.205 "947e2035-2e5f-4a07-87c7-cdb14f852ec4" 00:32:10.205 ], 00:32:10.205 "product_name": "Malloc disk", 00:32:10.205 "block_size": 512, 00:32:10.205 "num_blocks": 65536, 00:32:10.205 "uuid": "947e2035-2e5f-4a07-87c7-cdb14f852ec4", 00:32:10.205 "assigned_rate_limits": { 00:32:10.205 "rw_ios_per_sec": 0, 00:32:10.205 "rw_mbytes_per_sec": 0, 00:32:10.205 "r_mbytes_per_sec": 0, 00:32:10.205 "w_mbytes_per_sec": 0 00:32:10.205 }, 00:32:10.205 "claimed": true, 00:32:10.205 "claim_type": "exclusive_write", 00:32:10.205 "zoned": false, 00:32:10.205 "supported_io_types": { 00:32:10.205 "read": true, 00:32:10.205 "write": true, 00:32:10.205 "unmap": true, 00:32:10.205 "write_zeroes": true, 00:32:10.205 "flush": true, 00:32:10.205 "reset": true, 00:32:10.205 "compare": false, 00:32:10.205 "compare_and_write": false, 00:32:10.205 "abort": true, 00:32:10.205 "nvme_admin": false, 00:32:10.205 "nvme_io": false 00:32:10.205 }, 00:32:10.205 "memory_domains": [ 00:32:10.205 { 00:32:10.205 "dma_device_id": "system", 00:32:10.205 "dma_device_type": 1 00:32:10.205 }, 00:32:10.205 { 00:32:10.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.205 "dma_device_type": 2 00:32:10.205 } 00:32:10.205 ], 00:32:10.205 "driver_specific": {} 00:32:10.205 }' 00:32:10.205 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:10.464 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:10.464 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:10.464 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:10.464 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:10.464 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:10.464 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:10.723 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:10.723 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:10.723 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:10.723 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:10.723 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:10.723 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:10.723 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:10.723 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:10.982 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:10.982 "name": "BaseBdev3", 00:32:10.982 "aliases": [ 00:32:10.982 "a4213f83-335f-4f77-aa94-ce70195cb628" 00:32:10.982 ], 00:32:10.982 "product_name": "Malloc disk", 00:32:10.982 "block_size": 512, 00:32:10.982 "num_blocks": 65536, 00:32:10.982 "uuid": "a4213f83-335f-4f77-aa94-ce70195cb628", 00:32:10.982 "assigned_rate_limits": { 00:32:10.982 "rw_ios_per_sec": 0, 00:32:10.982 "rw_mbytes_per_sec": 0, 
00:32:10.982 "r_mbytes_per_sec": 0, 00:32:10.982 "w_mbytes_per_sec": 0 00:32:10.982 }, 00:32:10.982 "claimed": true, 00:32:10.982 "claim_type": "exclusive_write", 00:32:10.982 "zoned": false, 00:32:10.982 "supported_io_types": { 00:32:10.982 "read": true, 00:32:10.982 "write": true, 00:32:10.982 "unmap": true, 00:32:10.982 "write_zeroes": true, 00:32:10.982 "flush": true, 00:32:10.982 "reset": true, 00:32:10.982 "compare": false, 00:32:10.982 "compare_and_write": false, 00:32:10.982 "abort": true, 00:32:10.982 "nvme_admin": false, 00:32:10.982 "nvme_io": false 00:32:10.982 }, 00:32:10.982 "memory_domains": [ 00:32:10.982 { 00:32:10.982 "dma_device_id": "system", 00:32:10.982 "dma_device_type": 1 00:32:10.982 }, 00:32:10.982 { 00:32:10.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.982 "dma_device_type": 2 00:32:10.982 } 00:32:10.982 ], 00:32:10.982 "driver_specific": {} 00:32:10.982 }' 00:32:10.982 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:10.982 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:11.241 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:11.241 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:11.241 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:11.241 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:11.241 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:11.241 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:11.241 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:11.241 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:11.500 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:11.500 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:11.500 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:11.500 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:32:11.500 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:11.759 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:11.759 "name": "BaseBdev4", 00:32:11.759 "aliases": [ 00:32:11.759 "36d45e7f-9085-4010-83e5-fa355f959f23" 00:32:11.759 ], 00:32:11.759 "product_name": "Malloc disk", 00:32:11.759 "block_size": 512, 00:32:11.759 "num_blocks": 65536, 00:32:11.759 "uuid": "36d45e7f-9085-4010-83e5-fa355f959f23", 00:32:11.759 "assigned_rate_limits": { 00:32:11.759 "rw_ios_per_sec": 0, 00:32:11.759 "rw_mbytes_per_sec": 0, 00:32:11.759 "r_mbytes_per_sec": 0, 00:32:11.759 "w_mbytes_per_sec": 0 00:32:11.759 }, 00:32:11.759 "claimed": true, 00:32:11.759 "claim_type": "exclusive_write", 00:32:11.759 "zoned": false, 00:32:11.759 "supported_io_types": { 00:32:11.759 "read": true, 00:32:11.759 "write": true, 00:32:11.759 "unmap": true, 00:32:11.759 "write_zeroes": true, 00:32:11.759 "flush": true, 00:32:11.759 "reset": true, 00:32:11.759 "compare": false, 00:32:11.759 
"compare_and_write": false, 00:32:11.759 "abort": true, 00:32:11.759 "nvme_admin": false, 00:32:11.759 "nvme_io": false 00:32:11.759 }, 00:32:11.759 "memory_domains": [ 00:32:11.759 { 00:32:11.759 "dma_device_id": "system", 00:32:11.759 "dma_device_type": 1 00:32:11.759 }, 00:32:11.759 { 00:32:11.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.759 "dma_device_type": 2 00:32:11.759 } 00:32:11.759 ], 00:32:11.759 "driver_specific": {} 00:32:11.759 }' 00:32:11.759 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:11.759 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:11.759 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:11.759 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:12.018 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:12.018 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:12.018 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:12.018 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:12.018 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:12.018 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:12.276 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:12.276 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:12.276 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:12.536 [2024-05-15 11:25:30.947409] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:12.536 [2024-05-15 11:25:30.947450] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:12.536 [2024-05-15 11:25:30.947496] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.536 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:12.795 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:12.795 "name": "Existed_Raid", 00:32:12.795 "uuid": "b94a3efa-2cec-4dd5-8d8c-b444a190d217", 00:32:12.795 "strip_size_kb": 64, 00:32:12.795 "state": "offline", 00:32:12.795 "raid_level": "concat", 00:32:12.795 "superblock": true, 00:32:12.795 "num_base_bdevs": 4, 00:32:12.795 "num_base_bdevs_discovered": 3, 00:32:12.795 "num_base_bdevs_operational": 3, 00:32:12.795 "base_bdevs_list": [ 00:32:12.795 { 00:32:12.795 "name": null, 00:32:12.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.795 "is_configured": false, 00:32:12.795 "data_offset": 2048, 00:32:12.795 "data_size": 63488 00:32:12.795 }, 00:32:12.795 { 00:32:12.795 "name": "BaseBdev2", 00:32:12.795 "uuid": "947e2035-2e5f-4a07-87c7-cdb14f852ec4", 00:32:12.795 "is_configured": true, 00:32:12.795 "data_offset": 2048, 00:32:12.795 "data_size": 63488 00:32:12.795 }, 00:32:12.795 { 00:32:12.795 "name": "BaseBdev3", 00:32:12.795 "uuid": "a4213f83-335f-4f77-aa94-ce70195cb628", 00:32:12.795 "is_configured": true, 00:32:12.795 "data_offset": 2048, 00:32:12.795 "data_size": 63488 00:32:12.795 }, 00:32:12.795 { 00:32:12.795 "name": "BaseBdev4", 00:32:12.795 "uuid": "36d45e7f-9085-4010-83e5-fa355f959f23", 00:32:12.795 "is_configured": true, 00:32:12.795 "data_offset": 2048, 00:32:12.795 "data_size": 63488 00:32:12.795 } 00:32:12.795 ] 00:32:12.795 }' 00:32:12.795 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:12.795 11:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.363 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:13.363 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:13.363 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.363 11:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:32:13.622 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:32:13.622 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:13.622 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:13.882 [2024-05-15 11:25:32.457929] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:14.141 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
(( i++ )) 00:32:14.141 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:14.141 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.141 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:32:14.141 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:32:14.141 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:14.141 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:14.400 [2024-05-15 11:25:32.917395] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:14.400 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:14.400 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:14.400 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.400 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:32:14.662 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:32:14.662 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:14.662 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:32:14.922 [2024-05-15 11:25:33.411104] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:14.922 [2024-05-15 11:25:33.411164] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:32:14.922 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:14.922 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:14.922 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.922 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:32:15.181 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:32:15.181 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:32:15.181 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:32:15.181 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:32:15.181 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:32:15.181 11:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:15.440 BaseBdev2 00:32:15.440 11:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 
00:32:15.440 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:15.440 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:15.440 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:15.440 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:15.440 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:15.440 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:15.721 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:15.981 [ 00:32:15.981 { 00:32:15.981 "name": "BaseBdev2", 00:32:15.981 "aliases": [ 00:32:15.981 "0fcdba27-dd98-449d-8ebf-de45ee770456" 00:32:15.981 ], 00:32:15.981 "product_name": "Malloc disk", 00:32:15.981 "block_size": 512, 00:32:15.981 "num_blocks": 65536, 00:32:15.981 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:15.981 "assigned_rate_limits": { 00:32:15.981 "rw_ios_per_sec": 0, 00:32:15.981 "rw_mbytes_per_sec": 0, 00:32:15.981 "r_mbytes_per_sec": 0, 00:32:15.981 "w_mbytes_per_sec": 0 00:32:15.981 }, 00:32:15.981 "claimed": false, 00:32:15.981 "zoned": false, 00:32:15.981 "supported_io_types": { 00:32:15.981 "read": true, 00:32:15.981 "write": true, 00:32:15.981 "unmap": true, 00:32:15.981 "write_zeroes": true, 00:32:15.981 "flush": true, 00:32:15.981 "reset": true, 00:32:15.981 "compare": false, 00:32:15.981 "compare_and_write": false, 00:32:15.981 "abort": true, 00:32:15.981 "nvme_admin": false, 00:32:15.981 "nvme_io": false 00:32:15.981 }, 00:32:15.981 "memory_domains": [ 00:32:15.981 { 00:32:15.981 "dma_device_id": "system", 00:32:15.981 "dma_device_type": 1 00:32:15.981 }, 00:32:15.981 { 00:32:15.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:15.981 "dma_device_type": 2 00:32:15.981 } 00:32:15.981 ], 00:32:15.981 "driver_specific": {} 00:32:15.981 } 00:32:15.981 ] 00:32:15.981 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:15.981 11:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:32:15.981 11:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:32:15.981 11:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:16.239 BaseBdev3 00:32:16.240 11:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:32:16.240 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:32:16.240 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:16.240 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:16.240 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:16.240 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:16.240 11:25:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:16.498 11:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:16.498 [ 00:32:16.498 { 00:32:16.498 "name": "BaseBdev3", 00:32:16.498 "aliases": [ 00:32:16.498 "a547105d-bd2b-4d26-8dcd-7f2999803034" 00:32:16.498 ], 00:32:16.498 "product_name": "Malloc disk", 00:32:16.498 "block_size": 512, 00:32:16.498 "num_blocks": 65536, 00:32:16.498 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:16.498 "assigned_rate_limits": { 00:32:16.498 "rw_ios_per_sec": 0, 00:32:16.498 "rw_mbytes_per_sec": 0, 00:32:16.498 "r_mbytes_per_sec": 0, 00:32:16.498 "w_mbytes_per_sec": 0 00:32:16.498 }, 00:32:16.498 "claimed": false, 00:32:16.498 "zoned": false, 00:32:16.498 "supported_io_types": { 00:32:16.498 "read": true, 00:32:16.498 "write": true, 00:32:16.498 "unmap": true, 00:32:16.498 "write_zeroes": true, 00:32:16.498 "flush": true, 00:32:16.498 "reset": true, 00:32:16.498 "compare": false, 00:32:16.498 "compare_and_write": false, 00:32:16.498 "abort": true, 00:32:16.498 "nvme_admin": false, 00:32:16.498 "nvme_io": false 00:32:16.498 }, 00:32:16.498 "memory_domains": [ 00:32:16.498 { 00:32:16.498 "dma_device_id": "system", 00:32:16.498 "dma_device_type": 1 00:32:16.498 }, 00:32:16.498 { 00:32:16.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.498 "dma_device_type": 2 00:32:16.498 } 00:32:16.498 ], 00:32:16.498 "driver_specific": {} 00:32:16.498 } 00:32:16.498 ] 00:32:16.498 11:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:16.498 11:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:32:16.498 11:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:32:16.498 11:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:16.756 BaseBdev4 00:32:16.756 11:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:32:16.756 11:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:32:16.756 11:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:16.756 11:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:16.756 11:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:16.756 11:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:16.756 11:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:17.014 11:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:17.272 [ 00:32:17.272 { 00:32:17.272 "name": "BaseBdev4", 00:32:17.272 "aliases": [ 00:32:17.272 "c812756d-3682-4737-96c8-409ace525360" 00:32:17.272 ], 00:32:17.272 "product_name": "Malloc disk", 00:32:17.272 "block_size": 512, 
00:32:17.272 "num_blocks": 65536, 00:32:17.272 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:17.272 "assigned_rate_limits": { 00:32:17.272 "rw_ios_per_sec": 0, 00:32:17.272 "rw_mbytes_per_sec": 0, 00:32:17.272 "r_mbytes_per_sec": 0, 00:32:17.272 "w_mbytes_per_sec": 0 00:32:17.272 }, 00:32:17.272 "claimed": false, 00:32:17.273 "zoned": false, 00:32:17.273 "supported_io_types": { 00:32:17.273 "read": true, 00:32:17.273 "write": true, 00:32:17.273 "unmap": true, 00:32:17.273 "write_zeroes": true, 00:32:17.273 "flush": true, 00:32:17.273 "reset": true, 00:32:17.273 "compare": false, 00:32:17.273 "compare_and_write": false, 00:32:17.273 "abort": true, 00:32:17.273 "nvme_admin": false, 00:32:17.273 "nvme_io": false 00:32:17.273 }, 00:32:17.273 "memory_domains": [ 00:32:17.273 { 00:32:17.273 "dma_device_id": "system", 00:32:17.273 "dma_device_type": 1 00:32:17.273 }, 00:32:17.273 { 00:32:17.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.273 "dma_device_type": 2 00:32:17.273 } 00:32:17.273 ], 00:32:17.273 "driver_specific": {} 00:32:17.273 } 00:32:17.273 ] 00:32:17.273 11:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:17.273 11:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:32:17.273 11:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:32:17.273 11:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:17.530 [2024-05-15 11:25:36.022178] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:17.530 [2024-05-15 11:25:36.022272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:17.531 [2024-05-15 11:25:36.022321] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:17.531 [2024-05-15 11:25:36.024035] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:17.531 [2024-05-15 11:25:36.024092] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.531 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:17.789 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:17.789 "name": "Existed_Raid", 00:32:17.789 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:17.789 "strip_size_kb": 64, 00:32:17.789 "state": "configuring", 00:32:17.789 "raid_level": "concat", 00:32:17.789 "superblock": true, 00:32:17.789 "num_base_bdevs": 4, 00:32:17.789 "num_base_bdevs_discovered": 3, 00:32:17.789 "num_base_bdevs_operational": 4, 00:32:17.789 "base_bdevs_list": [ 00:32:17.789 { 00:32:17.789 "name": "BaseBdev1", 00:32:17.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.789 "is_configured": false, 00:32:17.789 "data_offset": 0, 00:32:17.789 "data_size": 0 00:32:17.789 }, 00:32:17.789 { 00:32:17.789 "name": "BaseBdev2", 00:32:17.789 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:17.789 "is_configured": true, 00:32:17.789 "data_offset": 2048, 00:32:17.789 "data_size": 63488 00:32:17.789 }, 00:32:17.789 { 00:32:17.789 "name": "BaseBdev3", 00:32:17.789 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:17.789 "is_configured": true, 00:32:17.789 "data_offset": 2048, 00:32:17.789 "data_size": 63488 00:32:17.789 }, 00:32:17.789 { 00:32:17.789 "name": "BaseBdev4", 00:32:17.789 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:17.789 "is_configured": true, 00:32:17.789 "data_offset": 2048, 00:32:17.789 "data_size": 63488 00:32:17.789 } 00:32:17.789 ] 00:32:17.789 }' 00:32:17.789 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:17.789 11:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:18.355 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:18.613 [2024-05-15 11:25:37.162308] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:18.613 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:18.613 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:18.613 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:18.614 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:18.614 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:18.614 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:18.614 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:18.614 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:18.614 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:18.614 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:18.614 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.614 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:18.872 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:18.872 "name": "Existed_Raid", 00:32:18.872 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:18.872 "strip_size_kb": 64, 00:32:18.872 "state": "configuring", 00:32:18.872 "raid_level": "concat", 00:32:18.872 "superblock": true, 00:32:18.872 "num_base_bdevs": 4, 00:32:18.872 "num_base_bdevs_discovered": 2, 00:32:18.872 "num_base_bdevs_operational": 4, 00:32:18.872 "base_bdevs_list": [ 00:32:18.872 { 00:32:18.872 "name": "BaseBdev1", 00:32:18.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.872 "is_configured": false, 00:32:18.872 "data_offset": 0, 00:32:18.872 "data_size": 0 00:32:18.872 }, 00:32:18.872 { 00:32:18.872 "name": null, 00:32:18.872 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:18.872 "is_configured": false, 00:32:18.872 "data_offset": 2048, 00:32:18.872 "data_size": 63488 00:32:18.872 }, 00:32:18.872 { 00:32:18.872 "name": "BaseBdev3", 00:32:18.872 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:18.872 "is_configured": true, 00:32:18.872 "data_offset": 2048, 00:32:18.872 "data_size": 63488 00:32:18.872 }, 00:32:18.872 { 00:32:18.872 "name": "BaseBdev4", 00:32:18.872 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:18.872 "is_configured": true, 00:32:18.872 "data_offset": 2048, 00:32:18.872 "data_size": 63488 00:32:18.872 } 00:32:18.872 ] 00:32:18.872 }' 00:32:18.872 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:18.872 11:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.808 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.808 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:19.808 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:32:19.808 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:20.072 BaseBdev1 00:32:20.072 [2024-05-15 11:25:38.670063] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:20.072 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:32:20.072 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:32:20.072 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:20.072 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:20.072 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:20.072 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:20.072 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:20.330 11:25:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:20.589 [ 00:32:20.589 { 00:32:20.589 "name": "BaseBdev1", 00:32:20.589 "aliases": [ 00:32:20.589 "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8" 00:32:20.589 ], 00:32:20.589 "product_name": "Malloc disk", 00:32:20.589 "block_size": 512, 00:32:20.589 "num_blocks": 65536, 00:32:20.589 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:20.589 "assigned_rate_limits": { 00:32:20.589 "rw_ios_per_sec": 0, 00:32:20.589 "rw_mbytes_per_sec": 0, 00:32:20.589 "r_mbytes_per_sec": 0, 00:32:20.589 "w_mbytes_per_sec": 0 00:32:20.589 }, 00:32:20.589 "claimed": true, 00:32:20.589 "claim_type": "exclusive_write", 00:32:20.589 "zoned": false, 00:32:20.589 "supported_io_types": { 00:32:20.589 "read": true, 00:32:20.589 "write": true, 00:32:20.589 "unmap": true, 00:32:20.589 "write_zeroes": true, 00:32:20.589 "flush": true, 00:32:20.589 "reset": true, 00:32:20.589 "compare": false, 00:32:20.589 "compare_and_write": false, 00:32:20.589 "abort": true, 00:32:20.589 "nvme_admin": false, 00:32:20.589 "nvme_io": false 00:32:20.589 }, 00:32:20.589 "memory_domains": [ 00:32:20.589 { 00:32:20.589 "dma_device_id": "system", 00:32:20.589 "dma_device_type": 1 00:32:20.589 }, 00:32:20.589 { 00:32:20.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:20.589 "dma_device_type": 2 00:32:20.589 } 00:32:20.589 ], 00:32:20.589 "driver_specific": {} 00:32:20.589 } 00:32:20.589 ] 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:20.589 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.848 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:20.848 "name": "Existed_Raid", 00:32:20.848 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:20.848 "strip_size_kb": 64, 00:32:20.848 "state": "configuring", 00:32:20.848 "raid_level": "concat", 00:32:20.848 "superblock": true, 00:32:20.848 "num_base_bdevs": 4, 00:32:20.848 "num_base_bdevs_discovered": 3, 
00:32:20.848 "num_base_bdevs_operational": 4, 00:32:20.848 "base_bdevs_list": [ 00:32:20.848 { 00:32:20.848 "name": "BaseBdev1", 00:32:20.848 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:20.848 "is_configured": true, 00:32:20.848 "data_offset": 2048, 00:32:20.848 "data_size": 63488 00:32:20.848 }, 00:32:20.848 { 00:32:20.848 "name": null, 00:32:20.848 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:20.848 "is_configured": false, 00:32:20.848 "data_offset": 2048, 00:32:20.848 "data_size": 63488 00:32:20.848 }, 00:32:20.848 { 00:32:20.848 "name": "BaseBdev3", 00:32:20.848 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:20.848 "is_configured": true, 00:32:20.848 "data_offset": 2048, 00:32:20.848 "data_size": 63488 00:32:20.848 }, 00:32:20.848 { 00:32:20.848 "name": "BaseBdev4", 00:32:20.848 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:20.848 "is_configured": true, 00:32:20.848 "data_offset": 2048, 00:32:20.848 "data_size": 63488 00:32:20.848 } 00:32:20.848 ] 00:32:20.848 }' 00:32:20.848 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:20.848 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.414 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.414 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:32:21.673 [2024-05-15 11:25:40.254462] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:21.673 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.931 11:25:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:21.931 "name": "Existed_Raid", 00:32:21.931 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:21.931 "strip_size_kb": 64, 00:32:21.931 "state": "configuring", 00:32:21.931 "raid_level": "concat", 00:32:21.931 "superblock": true, 00:32:21.931 "num_base_bdevs": 4, 00:32:21.931 "num_base_bdevs_discovered": 2, 00:32:21.931 "num_base_bdevs_operational": 4, 00:32:21.931 "base_bdevs_list": [ 00:32:21.931 { 00:32:21.931 "name": "BaseBdev1", 00:32:21.931 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:21.931 "is_configured": true, 00:32:21.931 "data_offset": 2048, 00:32:21.931 "data_size": 63488 00:32:21.931 }, 00:32:21.931 { 00:32:21.931 "name": null, 00:32:21.931 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:21.931 "is_configured": false, 00:32:21.931 "data_offset": 2048, 00:32:21.931 "data_size": 63488 00:32:21.931 }, 00:32:21.931 { 00:32:21.931 "name": null, 00:32:21.931 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:21.931 "is_configured": false, 00:32:21.931 "data_offset": 2048, 00:32:21.931 "data_size": 63488 00:32:21.931 }, 00:32:21.931 { 00:32:21.931 "name": "BaseBdev4", 00:32:21.931 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:21.931 "is_configured": true, 00:32:21.931 "data_offset": 2048, 00:32:21.931 "data_size": 63488 00:32:21.931 } 00:32:21.931 ] 00:32:21.931 }' 00:32:21.931 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:21.931 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.497 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:22.497 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.755 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:32:22.755 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:23.014 [2024-05-15 11:25:41.534708] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:23.015 
11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:23.015 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.273 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:23.273 "name": "Existed_Raid", 00:32:23.273 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:23.273 "strip_size_kb": 64, 00:32:23.273 "state": "configuring", 00:32:23.273 "raid_level": "concat", 00:32:23.273 "superblock": true, 00:32:23.273 "num_base_bdevs": 4, 00:32:23.273 "num_base_bdevs_discovered": 3, 00:32:23.273 "num_base_bdevs_operational": 4, 00:32:23.273 "base_bdevs_list": [ 00:32:23.273 { 00:32:23.273 "name": "BaseBdev1", 00:32:23.273 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:23.273 "is_configured": true, 00:32:23.273 "data_offset": 2048, 00:32:23.273 "data_size": 63488 00:32:23.273 }, 00:32:23.273 { 00:32:23.273 "name": null, 00:32:23.273 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:23.273 "is_configured": false, 00:32:23.273 "data_offset": 2048, 00:32:23.273 "data_size": 63488 00:32:23.273 }, 00:32:23.273 { 00:32:23.273 "name": "BaseBdev3", 00:32:23.273 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:23.273 "is_configured": true, 00:32:23.273 "data_offset": 2048, 00:32:23.273 "data_size": 63488 00:32:23.273 }, 00:32:23.273 { 00:32:23.273 "name": "BaseBdev4", 00:32:23.273 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:23.273 "is_configured": true, 00:32:23.273 "data_offset": 2048, 00:32:23.273 "data_size": 63488 00:32:23.273 } 00:32:23.273 ] 00:32:23.273 }' 00:32:23.273 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:23.273 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:23.839 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.839 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:24.098 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:32:24.098 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:24.356 [2024-05-15 11:25:42.898940] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:24.356 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:24.357 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:24.357 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:24.357 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:24.357 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:24.357 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:24.357 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:32:24.357 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:24.357 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:24.357 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:24.617 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.617 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.617 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:24.617 "name": "Existed_Raid", 00:32:24.617 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:24.617 "strip_size_kb": 64, 00:32:24.617 "state": "configuring", 00:32:24.617 "raid_level": "concat", 00:32:24.617 "superblock": true, 00:32:24.617 "num_base_bdevs": 4, 00:32:24.617 "num_base_bdevs_discovered": 2, 00:32:24.617 "num_base_bdevs_operational": 4, 00:32:24.617 "base_bdevs_list": [ 00:32:24.617 { 00:32:24.617 "name": null, 00:32:24.617 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:24.617 "is_configured": false, 00:32:24.617 "data_offset": 2048, 00:32:24.617 "data_size": 63488 00:32:24.617 }, 00:32:24.617 { 00:32:24.617 "name": null, 00:32:24.617 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:24.617 "is_configured": false, 00:32:24.617 "data_offset": 2048, 00:32:24.617 "data_size": 63488 00:32:24.617 }, 00:32:24.617 { 00:32:24.617 "name": "BaseBdev3", 00:32:24.617 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:24.617 "is_configured": true, 00:32:24.617 "data_offset": 2048, 00:32:24.617 "data_size": 63488 00:32:24.617 }, 00:32:24.617 { 00:32:24.617 "name": "BaseBdev4", 00:32:24.617 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:24.617 "is_configured": true, 00:32:24.617 "data_offset": 2048, 00:32:24.617 "data_size": 63488 00:32:24.617 } 00:32:24.617 ] 00:32:24.617 }' 00:32:24.617 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:24.617 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:25.551 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.551 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:25.551 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:32:25.551 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:25.809 [2024-05-15 11:25:44.324551] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # 
local raid_level=concat 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.809 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.067 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:26.067 "name": "Existed_Raid", 00:32:26.067 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:26.067 "strip_size_kb": 64, 00:32:26.067 "state": "configuring", 00:32:26.067 "raid_level": "concat", 00:32:26.067 "superblock": true, 00:32:26.067 "num_base_bdevs": 4, 00:32:26.067 "num_base_bdevs_discovered": 3, 00:32:26.067 "num_base_bdevs_operational": 4, 00:32:26.067 "base_bdevs_list": [ 00:32:26.067 { 00:32:26.067 "name": null, 00:32:26.067 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:26.067 "is_configured": false, 00:32:26.067 "data_offset": 2048, 00:32:26.067 "data_size": 63488 00:32:26.067 }, 00:32:26.067 { 00:32:26.067 "name": "BaseBdev2", 00:32:26.067 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:26.067 "is_configured": true, 00:32:26.067 "data_offset": 2048, 00:32:26.067 "data_size": 63488 00:32:26.067 }, 00:32:26.067 { 00:32:26.067 "name": "BaseBdev3", 00:32:26.067 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:26.067 "is_configured": true, 00:32:26.067 "data_offset": 2048, 00:32:26.067 "data_size": 63488 00:32:26.067 }, 00:32:26.067 { 00:32:26.067 "name": "BaseBdev4", 00:32:26.067 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:26.067 "is_configured": true, 00:32:26.067 "data_offset": 2048, 00:32:26.067 "data_size": 63488 00:32:26.067 } 00:32:26.067 ] 00:32:26.067 }' 00:32:26.067 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:26.067 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:27.001 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.001 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:27.001 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:32:27.001 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.001 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:27.259 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 22972ac6-42d3-42d4-b26a-5e3a03d9f8d8 00:32:27.516 NewBaseBdev 00:32:27.516 [2024-05-15 11:25:46.024996] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:27.516 [2024-05-15 11:25:46.025177] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:32:27.516 [2024-05-15 11:25:46.025193] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:27.516 [2024-05-15 11:25:46.025295] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:32:27.516 [2024-05-15 11:25:46.025501] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:32:27.516 [2024-05-15 11:25:46.025517] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:32:27.516 [2024-05-15 11:25:46.025622] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:27.516 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:32:27.517 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:32:27.517 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:27.517 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:27.517 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:27.517 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:27.517 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:27.776 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:28.032 [ 00:32:28.032 { 00:32:28.032 "name": "NewBaseBdev", 00:32:28.032 "aliases": [ 00:32:28.032 "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8" 00:32:28.032 ], 00:32:28.032 "product_name": "Malloc disk", 00:32:28.032 "block_size": 512, 00:32:28.032 "num_blocks": 65536, 00:32:28.032 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:28.032 "assigned_rate_limits": { 00:32:28.032 "rw_ios_per_sec": 0, 00:32:28.032 "rw_mbytes_per_sec": 0, 00:32:28.032 "r_mbytes_per_sec": 0, 00:32:28.032 "w_mbytes_per_sec": 0 00:32:28.032 }, 00:32:28.032 "claimed": true, 00:32:28.032 "claim_type": "exclusive_write", 00:32:28.032 "zoned": false, 00:32:28.032 "supported_io_types": { 00:32:28.032 "read": true, 00:32:28.032 "write": true, 00:32:28.032 "unmap": true, 00:32:28.032 "write_zeroes": true, 00:32:28.032 "flush": true, 00:32:28.032 "reset": true, 00:32:28.032 "compare": false, 00:32:28.032 "compare_and_write": false, 00:32:28.032 "abort": true, 00:32:28.032 "nvme_admin": false, 00:32:28.032 "nvme_io": false 00:32:28.032 }, 00:32:28.032 "memory_domains": [ 00:32:28.032 { 00:32:28.032 "dma_device_id": "system", 00:32:28.032 "dma_device_type": 1 00:32:28.032 }, 00:32:28.032 { 00:32:28.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.032 "dma_device_type": 2 00:32:28.032 } 00:32:28.032 ], 00:32:28.032 "driver_specific": {} 00:32:28.032 } 00:32:28.032 ] 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # return 0 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:28.032 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:28.033 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:28.033 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.033 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:28.291 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:28.291 "name": "Existed_Raid", 00:32:28.291 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:28.291 "strip_size_kb": 64, 00:32:28.291 "state": "online", 00:32:28.291 "raid_level": "concat", 00:32:28.291 "superblock": true, 00:32:28.291 "num_base_bdevs": 4, 00:32:28.291 "num_base_bdevs_discovered": 4, 00:32:28.291 "num_base_bdevs_operational": 4, 00:32:28.291 "base_bdevs_list": [ 00:32:28.291 { 00:32:28.291 "name": "NewBaseBdev", 00:32:28.291 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:28.291 "is_configured": true, 00:32:28.291 "data_offset": 2048, 00:32:28.291 "data_size": 63488 00:32:28.291 }, 00:32:28.291 { 00:32:28.291 "name": "BaseBdev2", 00:32:28.291 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:28.291 "is_configured": true, 00:32:28.291 "data_offset": 2048, 00:32:28.291 "data_size": 63488 00:32:28.291 }, 00:32:28.291 { 00:32:28.291 "name": "BaseBdev3", 00:32:28.291 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:28.291 "is_configured": true, 00:32:28.291 "data_offset": 2048, 00:32:28.291 "data_size": 63488 00:32:28.291 }, 00:32:28.291 { 00:32:28.291 "name": "BaseBdev4", 00:32:28.291 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:28.291 "is_configured": true, 00:32:28.291 "data_offset": 2048, 00:32:28.291 "data_size": 63488 00:32:28.291 } 00:32:28.291 ] 00:32:28.291 }' 00:32:28.291 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:28.291 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:28.856 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:32:28.856 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:32:28.856 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local 
raid_bdev_info 00:32:28.856 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:32:28.856 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:32:28.856 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:32:28.856 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:28.856 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:32:29.115 [2024-05-15 11:25:47.509417] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:29.115 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:32:29.115 "name": "Existed_Raid", 00:32:29.115 "aliases": [ 00:32:29.115 "823d5a41-74d1-491e-bbe1-04c5cb5085d8" 00:32:29.115 ], 00:32:29.115 "product_name": "Raid Volume", 00:32:29.115 "block_size": 512, 00:32:29.115 "num_blocks": 253952, 00:32:29.115 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:29.115 "assigned_rate_limits": { 00:32:29.115 "rw_ios_per_sec": 0, 00:32:29.115 "rw_mbytes_per_sec": 0, 00:32:29.115 "r_mbytes_per_sec": 0, 00:32:29.115 "w_mbytes_per_sec": 0 00:32:29.115 }, 00:32:29.115 "claimed": false, 00:32:29.115 "zoned": false, 00:32:29.115 "supported_io_types": { 00:32:29.115 "read": true, 00:32:29.115 "write": true, 00:32:29.115 "unmap": true, 00:32:29.115 "write_zeroes": true, 00:32:29.115 "flush": true, 00:32:29.115 "reset": true, 00:32:29.115 "compare": false, 00:32:29.115 "compare_and_write": false, 00:32:29.115 "abort": false, 00:32:29.115 "nvme_admin": false, 00:32:29.115 "nvme_io": false 00:32:29.115 }, 00:32:29.115 "memory_domains": [ 00:32:29.115 { 00:32:29.115 "dma_device_id": "system", 00:32:29.115 "dma_device_type": 1 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.115 "dma_device_type": 2 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "dma_device_id": "system", 00:32:29.115 "dma_device_type": 1 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.115 "dma_device_type": 2 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "dma_device_id": "system", 00:32:29.115 "dma_device_type": 1 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.115 "dma_device_type": 2 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "dma_device_id": "system", 00:32:29.115 "dma_device_type": 1 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.115 "dma_device_type": 2 00:32:29.115 } 00:32:29.115 ], 00:32:29.115 "driver_specific": { 00:32:29.115 "raid": { 00:32:29.115 "uuid": "823d5a41-74d1-491e-bbe1-04c5cb5085d8", 00:32:29.115 "strip_size_kb": 64, 00:32:29.115 "state": "online", 00:32:29.115 "raid_level": "concat", 00:32:29.115 "superblock": true, 00:32:29.115 "num_base_bdevs": 4, 00:32:29.115 "num_base_bdevs_discovered": 4, 00:32:29.115 "num_base_bdevs_operational": 4, 00:32:29.115 "base_bdevs_list": [ 00:32:29.115 { 00:32:29.115 "name": "NewBaseBdev", 00:32:29.115 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:29.115 "is_configured": true, 00:32:29.115 "data_offset": 2048, 00:32:29.115 "data_size": 63488 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "name": "BaseBdev2", 00:32:29.115 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:29.115 "is_configured": true, 
00:32:29.115 "data_offset": 2048, 00:32:29.115 "data_size": 63488 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "name": "BaseBdev3", 00:32:29.115 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:29.115 "is_configured": true, 00:32:29.115 "data_offset": 2048, 00:32:29.115 "data_size": 63488 00:32:29.115 }, 00:32:29.115 { 00:32:29.115 "name": "BaseBdev4", 00:32:29.115 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:29.115 "is_configured": true, 00:32:29.115 "data_offset": 2048, 00:32:29.115 "data_size": 63488 00:32:29.115 } 00:32:29.115 ] 00:32:29.115 } 00:32:29.115 } 00:32:29.115 }' 00:32:29.115 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:29.115 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:32:29.115 BaseBdev2 00:32:29.115 BaseBdev3 00:32:29.115 BaseBdev4' 00:32:29.115 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:29.115 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:32:29.115 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:29.374 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:29.374 "name": "NewBaseBdev", 00:32:29.374 "aliases": [ 00:32:29.374 "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8" 00:32:29.374 ], 00:32:29.374 "product_name": "Malloc disk", 00:32:29.374 "block_size": 512, 00:32:29.374 "num_blocks": 65536, 00:32:29.374 "uuid": "22972ac6-42d3-42d4-b26a-5e3a03d9f8d8", 00:32:29.374 "assigned_rate_limits": { 00:32:29.374 "rw_ios_per_sec": 0, 00:32:29.374 "rw_mbytes_per_sec": 0, 00:32:29.374 "r_mbytes_per_sec": 0, 00:32:29.374 "w_mbytes_per_sec": 0 00:32:29.374 }, 00:32:29.374 "claimed": true, 00:32:29.374 "claim_type": "exclusive_write", 00:32:29.374 "zoned": false, 00:32:29.374 "supported_io_types": { 00:32:29.374 "read": true, 00:32:29.374 "write": true, 00:32:29.374 "unmap": true, 00:32:29.374 "write_zeroes": true, 00:32:29.374 "flush": true, 00:32:29.374 "reset": true, 00:32:29.374 "compare": false, 00:32:29.374 "compare_and_write": false, 00:32:29.374 "abort": true, 00:32:29.374 "nvme_admin": false, 00:32:29.374 "nvme_io": false 00:32:29.374 }, 00:32:29.374 "memory_domains": [ 00:32:29.374 { 00:32:29.374 "dma_device_id": "system", 00:32:29.374 "dma_device_type": 1 00:32:29.374 }, 00:32:29.374 { 00:32:29.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.374 "dma_device_type": 2 00:32:29.374 } 00:32:29.374 ], 00:32:29.374 "driver_specific": {} 00:32:29.374 }' 00:32:29.374 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:29.374 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:29.374 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:29.374 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:29.374 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:29.374 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:29.374 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:29.632 
11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:29.632 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:29.632 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:29.632 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:29.632 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:29.632 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:29.632 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:29.632 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:29.891 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:29.891 "name": "BaseBdev2", 00:32:29.891 "aliases": [ 00:32:29.891 "0fcdba27-dd98-449d-8ebf-de45ee770456" 00:32:29.891 ], 00:32:29.891 "product_name": "Malloc disk", 00:32:29.891 "block_size": 512, 00:32:29.891 "num_blocks": 65536, 00:32:29.891 "uuid": "0fcdba27-dd98-449d-8ebf-de45ee770456", 00:32:29.891 "assigned_rate_limits": { 00:32:29.891 "rw_ios_per_sec": 0, 00:32:29.891 "rw_mbytes_per_sec": 0, 00:32:29.891 "r_mbytes_per_sec": 0, 00:32:29.891 "w_mbytes_per_sec": 0 00:32:29.891 }, 00:32:29.891 "claimed": true, 00:32:29.891 "claim_type": "exclusive_write", 00:32:29.891 "zoned": false, 00:32:29.891 "supported_io_types": { 00:32:29.891 "read": true, 00:32:29.891 "write": true, 00:32:29.891 "unmap": true, 00:32:29.891 "write_zeroes": true, 00:32:29.891 "flush": true, 00:32:29.891 "reset": true, 00:32:29.891 "compare": false, 00:32:29.891 "compare_and_write": false, 00:32:29.891 "abort": true, 00:32:29.891 "nvme_admin": false, 00:32:29.891 "nvme_io": false 00:32:29.891 }, 00:32:29.891 "memory_domains": [ 00:32:29.891 { 00:32:29.891 "dma_device_id": "system", 00:32:29.891 "dma_device_type": 1 00:32:29.891 }, 00:32:29.891 { 00:32:29.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.891 "dma_device_type": 2 00:32:29.891 } 00:32:29.891 ], 00:32:29.891 "driver_specific": {} 00:32:29.891 }' 00:32:29.891 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:29.891 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:30.150 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:30.150 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:30.150 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:30.150 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:30.150 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:30.150 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:30.150 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:30.150 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:30.409 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:30.409 11:25:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:30.409 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:30.409 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:30.409 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:30.667 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:30.667 "name": "BaseBdev3", 00:32:30.667 "aliases": [ 00:32:30.667 "a547105d-bd2b-4d26-8dcd-7f2999803034" 00:32:30.667 ], 00:32:30.667 "product_name": "Malloc disk", 00:32:30.667 "block_size": 512, 00:32:30.667 "num_blocks": 65536, 00:32:30.667 "uuid": "a547105d-bd2b-4d26-8dcd-7f2999803034", 00:32:30.667 "assigned_rate_limits": { 00:32:30.667 "rw_ios_per_sec": 0, 00:32:30.667 "rw_mbytes_per_sec": 0, 00:32:30.667 "r_mbytes_per_sec": 0, 00:32:30.667 "w_mbytes_per_sec": 0 00:32:30.667 }, 00:32:30.667 "claimed": true, 00:32:30.667 "claim_type": "exclusive_write", 00:32:30.667 "zoned": false, 00:32:30.667 "supported_io_types": { 00:32:30.667 "read": true, 00:32:30.667 "write": true, 00:32:30.667 "unmap": true, 00:32:30.667 "write_zeroes": true, 00:32:30.667 "flush": true, 00:32:30.667 "reset": true, 00:32:30.667 "compare": false, 00:32:30.667 "compare_and_write": false, 00:32:30.667 "abort": true, 00:32:30.667 "nvme_admin": false, 00:32:30.667 "nvme_io": false 00:32:30.667 }, 00:32:30.667 "memory_domains": [ 00:32:30.667 { 00:32:30.667 "dma_device_id": "system", 00:32:30.667 "dma_device_type": 1 00:32:30.667 }, 00:32:30.667 { 00:32:30.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.667 "dma_device_type": 2 00:32:30.667 } 00:32:30.667 ], 00:32:30.667 "driver_specific": {} 00:32:30.667 }' 00:32:30.667 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:30.668 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:30.668 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:30.668 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:30.668 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:30.668 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:30.668 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:30.926 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:30.926 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:30.926 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:30.926 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:30.926 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:30.926 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:30.926 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:32:30.926 11:25:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:31.185 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:31.185 "name": "BaseBdev4", 00:32:31.185 "aliases": [ 00:32:31.185 "c812756d-3682-4737-96c8-409ace525360" 00:32:31.185 ], 00:32:31.185 "product_name": "Malloc disk", 00:32:31.185 "block_size": 512, 00:32:31.185 "num_blocks": 65536, 00:32:31.185 "uuid": "c812756d-3682-4737-96c8-409ace525360", 00:32:31.185 "assigned_rate_limits": { 00:32:31.185 "rw_ios_per_sec": 0, 00:32:31.185 "rw_mbytes_per_sec": 0, 00:32:31.185 "r_mbytes_per_sec": 0, 00:32:31.185 "w_mbytes_per_sec": 0 00:32:31.185 }, 00:32:31.185 "claimed": true, 00:32:31.185 "claim_type": "exclusive_write", 00:32:31.185 "zoned": false, 00:32:31.185 "supported_io_types": { 00:32:31.185 "read": true, 00:32:31.185 "write": true, 00:32:31.185 "unmap": true, 00:32:31.185 "write_zeroes": true, 00:32:31.185 "flush": true, 00:32:31.185 "reset": true, 00:32:31.185 "compare": false, 00:32:31.185 "compare_and_write": false, 00:32:31.185 "abort": true, 00:32:31.185 "nvme_admin": false, 00:32:31.185 "nvme_io": false 00:32:31.185 }, 00:32:31.185 "memory_domains": [ 00:32:31.185 { 00:32:31.185 "dma_device_id": "system", 00:32:31.185 "dma_device_type": 1 00:32:31.185 }, 00:32:31.185 { 00:32:31.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.185 "dma_device_type": 2 00:32:31.185 } 00:32:31.185 ], 00:32:31.185 "driver_specific": {} 00:32:31.185 }' 00:32:31.185 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:31.185 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:31.185 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:31.185 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:31.466 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:31.466 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:31.466 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:31.466 11:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:31.466 11:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:31.466 11:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:31.466 11:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:31.724 [2024-05-15 11:25:50.309838] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:31.724 [2024-05-15 11:25:50.309885] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:31.724 [2024-05-15 11:25:50.309950] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:31.724 [2024-05-15 11:25:50.309997] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:31.724 [2024-05-15 11:25:50.310009] bdev_raid.c: 
350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 68384 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 68384 ']' 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 68384 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68384 00:32:31.724 killing process with pid 68384 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68384' 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 68384 00:32:31.724 11:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 68384 00:32:31.724 [2024-05-15 11:25:50.342655] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:32.291 [2024-05-15 11:25:50.656382] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:33.222 ************************************ 00:32:33.222 END TEST raid_state_function_test_sb 00:32:33.222 ************************************ 00:32:33.222 11:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:32:33.222 00:32:33.222 real 0m34.534s 00:32:33.222 user 1m5.233s 00:32:33.222 sys 0m3.510s 00:32:33.222 11:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:33.222 11:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.480 11:25:51 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:32:33.480 11:25:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:32:33.480 11:25:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:33.480 11:25:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:33.480 ************************************ 00:32:33.480 START TEST raid_superblock_test 00:32:33.480 ************************************ 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # 
base_bdevs_pt_uuid=() 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:33.480 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69496 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69496 /var/tmp/spdk-raid.sock 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 69496 ']' 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:33.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:33.481 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.481 [2024-05-15 11:25:52.074048] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:32:33.481 [2024-05-15 11:25:52.074268] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69496 ] 00:32:33.739 [2024-05-15 11:25:52.243276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.998 [2024-05-15 11:25:52.487967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.257 [2024-05-15 11:25:52.681890] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:34.257 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:32:34.516 malloc1 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:34.773 [2024-05-15 11:25:53.330561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:34.773 [2024-05-15 11:25:53.330693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.773 [2024-05-15 11:25:53.330763] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:32:34.773 [2024-05-15 11:25:53.330802] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.773 [2024-05-15 11:25:53.332926] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.773 [2024-05-15 11:25:53.332964] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:34.773 pt1 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:34.773 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:32:35.030 malloc2 00:32:35.030 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:35.288 [2024-05-15 11:25:53.757421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:35.288 [2024-05-15 11:25:53.757553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.288 [2024-05-15 11:25:53.757608] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:32:35.288 [2024-05-15 11:25:53.757650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.288 [2024-05-15 11:25:53.759532] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.288 [2024-05-15 11:25:53.759610] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:35.288 pt2 00:32:35.288 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:35.288 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:35.288 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:32:35.288 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:32:35.288 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:35.288 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:35.288 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:35.288 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:35.288 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:32:35.547 malloc3 00:32:35.547 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:35.806 [2024-05-15 11:25:54.220423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:35.806 [2024-05-15 11:25:54.220519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.806 [2024-05-15 11:25:54.220568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:32:35.806 [2024-05-15 11:25:54.220614] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.806 [2024-05-15 11:25:54.222842] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.806 [2024-05-15 11:25:54.222890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:35.806 pt3 00:32:35.806 11:25:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:35.806 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:35.806 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:32:35.806 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:32:35.806 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:32:35.806 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:35.806 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:35.806 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:35.806 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:32:36.064 malloc4 00:32:36.064 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:36.064 [2024-05-15 11:25:54.636483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:36.064 [2024-05-15 11:25:54.636624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:36.064 [2024-05-15 11:25:54.636676] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:32:36.064 [2024-05-15 11:25:54.636732] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:36.064 [2024-05-15 11:25:54.638872] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:36.064 [2024-05-15 11:25:54.638952] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:36.064 pt4 00:32:36.064 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:36.064 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:36.064 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:32:36.323 [2024-05-15 11:25:54.824616] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:36.323 [2024-05-15 11:25:54.826406] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:36.323 [2024-05-15 11:25:54.826457] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:36.323 [2024-05-15 11:25:54.826513] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:36.323 [2024-05-15 11:25:54.826642] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:32:36.323 [2024-05-15 11:25:54.826656] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:36.323 [2024-05-15 11:25:54.826767] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:32:36.323 [2024-05-15 11:25:54.827034] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:32:36.323 [2024-05-15 11:25:54.827050] bdev_raid.c:1726:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:32:36.323 [2024-05-15 11:25:54.827193] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.323 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.582 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:36.582 "name": "raid_bdev1", 00:32:36.582 "uuid": "92353ad0-62f9-4023-a5b5-6fd9455f5bb5", 00:32:36.582 "strip_size_kb": 64, 00:32:36.582 "state": "online", 00:32:36.582 "raid_level": "concat", 00:32:36.582 "superblock": true, 00:32:36.582 "num_base_bdevs": 4, 00:32:36.582 "num_base_bdevs_discovered": 4, 00:32:36.582 "num_base_bdevs_operational": 4, 00:32:36.582 "base_bdevs_list": [ 00:32:36.582 { 00:32:36.582 "name": "pt1", 00:32:36.582 "uuid": "474e44ac-a528-5229-acd0-9ae393f81268", 00:32:36.582 "is_configured": true, 00:32:36.582 "data_offset": 2048, 00:32:36.582 "data_size": 63488 00:32:36.582 }, 00:32:36.582 { 00:32:36.582 "name": "pt2", 00:32:36.582 "uuid": "b01007ed-5129-5893-bb82-b6c7a1cd822b", 00:32:36.582 "is_configured": true, 00:32:36.582 "data_offset": 2048, 00:32:36.582 "data_size": 63488 00:32:36.582 }, 00:32:36.582 { 00:32:36.582 "name": "pt3", 00:32:36.582 "uuid": "4710670e-0739-5798-889e-dd49b4020526", 00:32:36.582 "is_configured": true, 00:32:36.582 "data_offset": 2048, 00:32:36.582 "data_size": 63488 00:32:36.582 }, 00:32:36.582 { 00:32:36.582 "name": "pt4", 00:32:36.582 "uuid": "2c3b0c3f-6e8f-5673-9a04-705449d9871d", 00:32:36.582 "is_configured": true, 00:32:36.582 "data_offset": 2048, 00:32:36.582 "data_size": 63488 00:32:36.582 } 00:32:36.582 ] 00:32:36.582 }' 00:32:36.582 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:36.582 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.148 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:37.148 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:32:37.148 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 
00:32:37.148 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:32:37.148 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:32:37.148 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:32:37.148 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:32:37.148 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:37.407 [2024-05-15 11:25:55.880945] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:37.407 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:32:37.407 "name": "raid_bdev1", 00:32:37.407 "aliases": [ 00:32:37.407 "92353ad0-62f9-4023-a5b5-6fd9455f5bb5" 00:32:37.407 ], 00:32:37.407 "product_name": "Raid Volume", 00:32:37.407 "block_size": 512, 00:32:37.407 "num_blocks": 253952, 00:32:37.407 "uuid": "92353ad0-62f9-4023-a5b5-6fd9455f5bb5", 00:32:37.407 "assigned_rate_limits": { 00:32:37.407 "rw_ios_per_sec": 0, 00:32:37.407 "rw_mbytes_per_sec": 0, 00:32:37.407 "r_mbytes_per_sec": 0, 00:32:37.407 "w_mbytes_per_sec": 0 00:32:37.407 }, 00:32:37.407 "claimed": false, 00:32:37.407 "zoned": false, 00:32:37.407 "supported_io_types": { 00:32:37.407 "read": true, 00:32:37.407 "write": true, 00:32:37.407 "unmap": true, 00:32:37.407 "write_zeroes": true, 00:32:37.407 "flush": true, 00:32:37.407 "reset": true, 00:32:37.407 "compare": false, 00:32:37.407 "compare_and_write": false, 00:32:37.407 "abort": false, 00:32:37.407 "nvme_admin": false, 00:32:37.407 "nvme_io": false 00:32:37.407 }, 00:32:37.407 "memory_domains": [ 00:32:37.407 { 00:32:37.407 "dma_device_id": "system", 00:32:37.407 "dma_device_type": 1 00:32:37.407 }, 00:32:37.407 { 00:32:37.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.407 "dma_device_type": 2 00:32:37.407 }, 00:32:37.407 { 00:32:37.407 "dma_device_id": "system", 00:32:37.407 "dma_device_type": 1 00:32:37.407 }, 00:32:37.407 { 00:32:37.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.407 "dma_device_type": 2 00:32:37.407 }, 00:32:37.407 { 00:32:37.407 "dma_device_id": "system", 00:32:37.407 "dma_device_type": 1 00:32:37.407 }, 00:32:37.407 { 00:32:37.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.407 "dma_device_type": 2 00:32:37.407 }, 00:32:37.407 { 00:32:37.407 "dma_device_id": "system", 00:32:37.407 "dma_device_type": 1 00:32:37.407 }, 00:32:37.407 { 00:32:37.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.407 "dma_device_type": 2 00:32:37.407 } 00:32:37.407 ], 00:32:37.407 "driver_specific": { 00:32:37.408 "raid": { 00:32:37.408 "uuid": "92353ad0-62f9-4023-a5b5-6fd9455f5bb5", 00:32:37.408 "strip_size_kb": 64, 00:32:37.408 "state": "online", 00:32:37.408 "raid_level": "concat", 00:32:37.408 "superblock": true, 00:32:37.408 "num_base_bdevs": 4, 00:32:37.408 "num_base_bdevs_discovered": 4, 00:32:37.408 "num_base_bdevs_operational": 4, 00:32:37.408 "base_bdevs_list": [ 00:32:37.408 { 00:32:37.408 "name": "pt1", 00:32:37.408 "uuid": "474e44ac-a528-5229-acd0-9ae393f81268", 00:32:37.408 "is_configured": true, 00:32:37.408 "data_offset": 2048, 00:32:37.408 "data_size": 63488 00:32:37.408 }, 00:32:37.408 { 00:32:37.408 "name": "pt2", 00:32:37.408 "uuid": "b01007ed-5129-5893-bb82-b6c7a1cd822b", 00:32:37.408 "is_configured": true, 00:32:37.408 "data_offset": 2048, 00:32:37.408 "data_size": 63488 00:32:37.408 }, 
00:32:37.408 { 00:32:37.408 "name": "pt3", 00:32:37.408 "uuid": "4710670e-0739-5798-889e-dd49b4020526", 00:32:37.408 "is_configured": true, 00:32:37.408 "data_offset": 2048, 00:32:37.408 "data_size": 63488 00:32:37.408 }, 00:32:37.408 { 00:32:37.408 "name": "pt4", 00:32:37.408 "uuid": "2c3b0c3f-6e8f-5673-9a04-705449d9871d", 00:32:37.408 "is_configured": true, 00:32:37.408 "data_offset": 2048, 00:32:37.408 "data_size": 63488 00:32:37.408 } 00:32:37.408 ] 00:32:37.408 } 00:32:37.408 } 00:32:37.408 }' 00:32:37.408 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:37.408 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:32:37.408 pt2 00:32:37.408 pt3 00:32:37.408 pt4' 00:32:37.408 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:37.408 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:37.408 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:37.667 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:37.667 "name": "pt1", 00:32:37.667 "aliases": [ 00:32:37.667 "474e44ac-a528-5229-acd0-9ae393f81268" 00:32:37.667 ], 00:32:37.667 "product_name": "passthru", 00:32:37.667 "block_size": 512, 00:32:37.667 "num_blocks": 65536, 00:32:37.667 "uuid": "474e44ac-a528-5229-acd0-9ae393f81268", 00:32:37.667 "assigned_rate_limits": { 00:32:37.667 "rw_ios_per_sec": 0, 00:32:37.667 "rw_mbytes_per_sec": 0, 00:32:37.667 "r_mbytes_per_sec": 0, 00:32:37.667 "w_mbytes_per_sec": 0 00:32:37.667 }, 00:32:37.667 "claimed": true, 00:32:37.667 "claim_type": "exclusive_write", 00:32:37.667 "zoned": false, 00:32:37.667 "supported_io_types": { 00:32:37.667 "read": true, 00:32:37.667 "write": true, 00:32:37.667 "unmap": true, 00:32:37.667 "write_zeroes": true, 00:32:37.667 "flush": true, 00:32:37.667 "reset": true, 00:32:37.667 "compare": false, 00:32:37.667 "compare_and_write": false, 00:32:37.667 "abort": true, 00:32:37.667 "nvme_admin": false, 00:32:37.667 "nvme_io": false 00:32:37.667 }, 00:32:37.667 "memory_domains": [ 00:32:37.667 { 00:32:37.667 "dma_device_id": "system", 00:32:37.667 "dma_device_type": 1 00:32:37.667 }, 00:32:37.667 { 00:32:37.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.667 "dma_device_type": 2 00:32:37.667 } 00:32:37.667 ], 00:32:37.667 "driver_specific": { 00:32:37.667 "passthru": { 00:32:37.667 "name": "pt1", 00:32:37.667 "base_bdev_name": "malloc1" 00:32:37.667 } 00:32:37.667 } 00:32:37.667 }' 00:32:37.667 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:37.667 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:37.667 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:37.667 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:37.925 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:37.925 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:37.925 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:37.925 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:38.241 11:25:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:38.241 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:38.241 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:38.241 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:38.241 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:38.241 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:38.241 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:38.513 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:38.513 "name": "pt2", 00:32:38.513 "aliases": [ 00:32:38.513 "b01007ed-5129-5893-bb82-b6c7a1cd822b" 00:32:38.513 ], 00:32:38.513 "product_name": "passthru", 00:32:38.513 "block_size": 512, 00:32:38.513 "num_blocks": 65536, 00:32:38.513 "uuid": "b01007ed-5129-5893-bb82-b6c7a1cd822b", 00:32:38.513 "assigned_rate_limits": { 00:32:38.513 "rw_ios_per_sec": 0, 00:32:38.513 "rw_mbytes_per_sec": 0, 00:32:38.513 "r_mbytes_per_sec": 0, 00:32:38.513 "w_mbytes_per_sec": 0 00:32:38.513 }, 00:32:38.513 "claimed": true, 00:32:38.513 "claim_type": "exclusive_write", 00:32:38.513 "zoned": false, 00:32:38.513 "supported_io_types": { 00:32:38.513 "read": true, 00:32:38.513 "write": true, 00:32:38.513 "unmap": true, 00:32:38.513 "write_zeroes": true, 00:32:38.513 "flush": true, 00:32:38.513 "reset": true, 00:32:38.513 "compare": false, 00:32:38.513 "compare_and_write": false, 00:32:38.513 "abort": true, 00:32:38.513 "nvme_admin": false, 00:32:38.513 "nvme_io": false 00:32:38.513 }, 00:32:38.513 "memory_domains": [ 00:32:38.513 { 00:32:38.513 "dma_device_id": "system", 00:32:38.513 "dma_device_type": 1 00:32:38.513 }, 00:32:38.513 { 00:32:38.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:38.513 "dma_device_type": 2 00:32:38.513 } 00:32:38.513 ], 00:32:38.513 "driver_specific": { 00:32:38.513 "passthru": { 00:32:38.513 "name": "pt2", 00:32:38.513 "base_bdev_name": "malloc2" 00:32:38.513 } 00:32:38.513 } 00:32:38.513 }' 00:32:38.513 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:38.513 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:38.513 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:38.513 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:38.513 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- 
# for name in $base_bdev_names 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:32:38.772 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:39.030 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:39.030 "name": "pt3", 00:32:39.030 "aliases": [ 00:32:39.030 "4710670e-0739-5798-889e-dd49b4020526" 00:32:39.030 ], 00:32:39.030 "product_name": "passthru", 00:32:39.030 "block_size": 512, 00:32:39.030 "num_blocks": 65536, 00:32:39.030 "uuid": "4710670e-0739-5798-889e-dd49b4020526", 00:32:39.030 "assigned_rate_limits": { 00:32:39.030 "rw_ios_per_sec": 0, 00:32:39.030 "rw_mbytes_per_sec": 0, 00:32:39.030 "r_mbytes_per_sec": 0, 00:32:39.030 "w_mbytes_per_sec": 0 00:32:39.030 }, 00:32:39.030 "claimed": true, 00:32:39.030 "claim_type": "exclusive_write", 00:32:39.030 "zoned": false, 00:32:39.030 "supported_io_types": { 00:32:39.030 "read": true, 00:32:39.030 "write": true, 00:32:39.030 "unmap": true, 00:32:39.030 "write_zeroes": true, 00:32:39.030 "flush": true, 00:32:39.030 "reset": true, 00:32:39.030 "compare": false, 00:32:39.030 "compare_and_write": false, 00:32:39.030 "abort": true, 00:32:39.030 "nvme_admin": false, 00:32:39.030 "nvme_io": false 00:32:39.030 }, 00:32:39.030 "memory_domains": [ 00:32:39.030 { 00:32:39.030 "dma_device_id": "system", 00:32:39.030 "dma_device_type": 1 00:32:39.030 }, 00:32:39.030 { 00:32:39.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.030 "dma_device_type": 2 00:32:39.030 } 00:32:39.030 ], 00:32:39.030 "driver_specific": { 00:32:39.030 "passthru": { 00:32:39.030 "name": "pt3", 00:32:39.030 "base_bdev_name": "malloc3" 00:32:39.030 } 00:32:39.030 } 00:32:39.030 }' 00:32:39.030 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:39.288 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:39.288 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:39.288 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:39.288 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:39.288 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:39.288 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:39.288 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:39.546 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:39.546 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:39.546 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:39.546 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:39.546 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:39.546 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:32:39.546 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:39.805 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:39.805 "name": "pt4", 00:32:39.805 "aliases": [ 
00:32:39.805 "2c3b0c3f-6e8f-5673-9a04-705449d9871d" 00:32:39.805 ], 00:32:39.805 "product_name": "passthru", 00:32:39.805 "block_size": 512, 00:32:39.805 "num_blocks": 65536, 00:32:39.805 "uuid": "2c3b0c3f-6e8f-5673-9a04-705449d9871d", 00:32:39.805 "assigned_rate_limits": { 00:32:39.805 "rw_ios_per_sec": 0, 00:32:39.805 "rw_mbytes_per_sec": 0, 00:32:39.805 "r_mbytes_per_sec": 0, 00:32:39.805 "w_mbytes_per_sec": 0 00:32:39.805 }, 00:32:39.805 "claimed": true, 00:32:39.805 "claim_type": "exclusive_write", 00:32:39.805 "zoned": false, 00:32:39.805 "supported_io_types": { 00:32:39.805 "read": true, 00:32:39.805 "write": true, 00:32:39.805 "unmap": true, 00:32:39.805 "write_zeroes": true, 00:32:39.805 "flush": true, 00:32:39.805 "reset": true, 00:32:39.805 "compare": false, 00:32:39.805 "compare_and_write": false, 00:32:39.805 "abort": true, 00:32:39.805 "nvme_admin": false, 00:32:39.805 "nvme_io": false 00:32:39.805 }, 00:32:39.805 "memory_domains": [ 00:32:39.805 { 00:32:39.805 "dma_device_id": "system", 00:32:39.805 "dma_device_type": 1 00:32:39.805 }, 00:32:39.805 { 00:32:39.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.805 "dma_device_type": 2 00:32:39.805 } 00:32:39.805 ], 00:32:39.805 "driver_specific": { 00:32:39.805 "passthru": { 00:32:39.805 "name": "pt4", 00:32:39.805 "base_bdev_name": "malloc4" 00:32:39.805 } 00:32:39.805 } 00:32:39.805 }' 00:32:39.805 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:39.805 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:40.064 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:40.064 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:40.064 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:40.064 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:40.064 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:40.064 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:40.064 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:40.064 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:40.323 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:40.323 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:40.323 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:40.323 11:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:40.581 [2024-05-15 11:25:59.021527] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:40.581 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92353ad0-62f9-4023-a5b5-6fd9455f5bb5 00:32:40.581 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 92353ad0-62f9-4023-a5b5-6fd9455f5bb5 ']' 00:32:40.581 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:40.581 [2024-05-15 11:25:59.217362] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:40.581 
[2024-05-15 11:25:59.217423] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:40.581 [2024-05-15 11:25:59.217529] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:40.581 [2024-05-15 11:25:59.217661] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:40.581 [2024-05-15 11:25:59.217698] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:32:40.840 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.840 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:40.840 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:40.840 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:40.840 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:40.840 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:41.099 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:41.099 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:41.358 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:41.358 11:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:32:41.629 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:41.629 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:32:41.887 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:41.887 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- 
# type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:32:42.146 [2024-05-15 11:26:00.725618] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:42.146 [2024-05-15 11:26:00.727596] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:42.146 [2024-05-15 11:26:00.727650] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:42.146 [2024-05-15 11:26:00.727682] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:32:42.146 [2024-05-15 11:26:00.727720] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:42.146 [2024-05-15 11:26:00.727796] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:42.146 [2024-05-15 11:26:00.727848] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:32:42.146 [2024-05-15 11:26:00.727937] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:32:42.146 [2024-05-15 11:26:00.727969] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:42.146 [2024-05-15 11:26:00.727980] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:32:42.146 request: 00:32:42.146 { 00:32:42.146 "name": "raid_bdev1", 00:32:42.146 "raid_level": "concat", 00:32:42.146 "base_bdevs": [ 00:32:42.146 "malloc1", 00:32:42.146 "malloc2", 00:32:42.146 "malloc3", 00:32:42.146 "malloc4" 00:32:42.146 ], 00:32:42.146 "strip_size_kb": 64, 00:32:42.146 "superblock": false, 00:32:42.146 "method": "bdev_raid_create", 00:32:42.146 "req_id": 1 00:32:42.146 } 00:32:42.146 Got JSON-RPC error response 00:32:42.146 response: 00:32:42.146 { 00:32:42.146 "code": -17, 00:32:42.146 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:42.146 } 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.146 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:42.405 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:42.405 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:42.405 11:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:42.664 [2024-05-15 11:26:01.153640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:42.664 [2024-05-15 11:26:01.153784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.664 [2024-05-15 11:26:01.154087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f780 00:32:42.664 [2024-05-15 11:26:01.154158] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.664 [2024-05-15 11:26:01.155930] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.664 [2024-05-15 11:26:01.155995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:42.664 [2024-05-15 11:26:01.156115] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:32:42.664 [2024-05-15 11:26:01.156179] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:42.664 pt1 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.664 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.922 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:42.922 "name": "raid_bdev1", 00:32:42.922 "uuid": "92353ad0-62f9-4023-a5b5-6fd9455f5bb5", 00:32:42.922 "strip_size_kb": 64, 00:32:42.922 "state": "configuring", 00:32:42.922 "raid_level": "concat", 00:32:42.922 "superblock": true, 00:32:42.922 "num_base_bdevs": 4, 00:32:42.922 "num_base_bdevs_discovered": 1, 00:32:42.922 "num_base_bdevs_operational": 4, 00:32:42.922 "base_bdevs_list": [ 00:32:42.922 { 00:32:42.922 "name": "pt1", 00:32:42.922 "uuid": 
"474e44ac-a528-5229-acd0-9ae393f81268", 00:32:42.922 "is_configured": true, 00:32:42.922 "data_offset": 2048, 00:32:42.922 "data_size": 63488 00:32:42.922 }, 00:32:42.922 { 00:32:42.922 "name": null, 00:32:42.922 "uuid": "b01007ed-5129-5893-bb82-b6c7a1cd822b", 00:32:42.922 "is_configured": false, 00:32:42.922 "data_offset": 2048, 00:32:42.922 "data_size": 63488 00:32:42.922 }, 00:32:42.922 { 00:32:42.922 "name": null, 00:32:42.922 "uuid": "4710670e-0739-5798-889e-dd49b4020526", 00:32:42.922 "is_configured": false, 00:32:42.922 "data_offset": 2048, 00:32:42.922 "data_size": 63488 00:32:42.922 }, 00:32:42.922 { 00:32:42.922 "name": null, 00:32:42.922 "uuid": "2c3b0c3f-6e8f-5673-9a04-705449d9871d", 00:32:42.922 "is_configured": false, 00:32:42.922 "data_offset": 2048, 00:32:42.922 "data_size": 63488 00:32:42.922 } 00:32:42.922 ] 00:32:42.922 }' 00:32:42.922 11:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:42.923 11:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.488 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:32:43.488 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:43.746 [2024-05-15 11:26:02.209803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:43.746 [2024-05-15 11:26:02.210154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:43.746 [2024-05-15 11:26:02.210216] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:32:43.746 [2024-05-15 11:26:02.210240] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:43.746 [2024-05-15 11:26:02.210616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:43.746 [2024-05-15 11:26:02.210661] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:43.746 [2024-05-15 11:26:02.210752] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:32:43.746 [2024-05-15 11:26:02.210779] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:43.746 pt2 00:32:43.746 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:44.005 [2024-05-15 11:26:02.409913] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.005 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.264 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:44.264 "name": "raid_bdev1", 00:32:44.264 "uuid": "92353ad0-62f9-4023-a5b5-6fd9455f5bb5", 00:32:44.264 "strip_size_kb": 64, 00:32:44.264 "state": "configuring", 00:32:44.264 "raid_level": "concat", 00:32:44.264 "superblock": true, 00:32:44.264 "num_base_bdevs": 4, 00:32:44.264 "num_base_bdevs_discovered": 1, 00:32:44.264 "num_base_bdevs_operational": 4, 00:32:44.264 "base_bdevs_list": [ 00:32:44.264 { 00:32:44.264 "name": "pt1", 00:32:44.264 "uuid": "474e44ac-a528-5229-acd0-9ae393f81268", 00:32:44.264 "is_configured": true, 00:32:44.264 "data_offset": 2048, 00:32:44.264 "data_size": 63488 00:32:44.264 }, 00:32:44.264 { 00:32:44.264 "name": null, 00:32:44.264 "uuid": "b01007ed-5129-5893-bb82-b6c7a1cd822b", 00:32:44.264 "is_configured": false, 00:32:44.264 "data_offset": 2048, 00:32:44.264 "data_size": 63488 00:32:44.264 }, 00:32:44.264 { 00:32:44.264 "name": null, 00:32:44.264 "uuid": "4710670e-0739-5798-889e-dd49b4020526", 00:32:44.264 "is_configured": false, 00:32:44.264 "data_offset": 2048, 00:32:44.264 "data_size": 63488 00:32:44.264 }, 00:32:44.264 { 00:32:44.264 "name": null, 00:32:44.264 "uuid": "2c3b0c3f-6e8f-5673-9a04-705449d9871d", 00:32:44.264 "is_configured": false, 00:32:44.264 "data_offset": 2048, 00:32:44.264 "data_size": 63488 00:32:44.264 } 00:32:44.264 ] 00:32:44.264 }' 00:32:44.264 11:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:44.264 11:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.829 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:44.829 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:44.829 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:45.094 [2024-05-15 11:26:03.530120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:45.094 [2024-05-15 11:26:03.530234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.094 [2024-05-15 11:26:03.530286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032780 00:32:45.094 [2024-05-15 11:26:03.530324] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.094 [2024-05-15 11:26:03.530718] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.094 [2024-05-15 11:26:03.530764] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:45.094 [2024-05-15 11:26:03.530843] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:32:45.094 [2024-05-15 11:26:03.530868] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
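At this point the test is midway through the recreate loop at bdev_raid.sh@478-479: pt2 has just been re-registered and re-claimed (the raid superblock written earlier is found on malloc2, so the configuring array re-adopts it), and pt3/pt4 follow below. The loop amounts to the following sketch, assuming the same socket path; the fixed UUIDs match the ones in the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Re-create the deleted passthru bdevs; raid examine re-claims each one, and the
# array flips from "configuring" back to "online" once the fourth is discovered.
for i in 2 3 4; do
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"
done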
00:32:45.094 pt2 00:32:45.094 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:45.094 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:45.094 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:45.365 [2024-05-15 11:26:03.762227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:45.365 [2024-05-15 11:26:03.762392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.365 [2024-05-15 11:26:03.762460] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033c80 00:32:45.365 [2024-05-15 11:26:03.762500] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.365 [2024-05-15 11:26:03.763271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.365 [2024-05-15 11:26:03.763359] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:45.365 [2024-05-15 11:26:03.763514] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:32:45.365 [2024-05-15 11:26:03.763610] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:45.365 pt3 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:45.365 [2024-05-15 11:26:03.966259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:45.365 [2024-05-15 11:26:03.966388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.365 [2024-05-15 11:26:03.966433] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035180 00:32:45.365 [2024-05-15 11:26:03.966463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.365 [2024-05-15 11:26:03.967068] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.365 [2024-05-15 11:26:03.967137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:45.365 [2024-05-15 11:26:03.967230] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:32:45.365 [2024-05-15 11:26:03.967260] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:45.365 [2024-05-15 11:26:03.967358] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:32:45.365 [2024-05-15 11:26:03.967372] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:45.365 [2024-05-15 11:26:03.967451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:45.365 [2024-05-15 11:26:03.967690] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:32:45.365 [2024-05-15 11:26:03.967705] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:32:45.365 [2024-05-15 
11:26:03.967824] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.365 pt4 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.365 11:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.624 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:45.624 "name": "raid_bdev1", 00:32:45.624 "uuid": "92353ad0-62f9-4023-a5b5-6fd9455f5bb5", 00:32:45.624 "strip_size_kb": 64, 00:32:45.624 "state": "online", 00:32:45.624 "raid_level": "concat", 00:32:45.624 "superblock": true, 00:32:45.624 "num_base_bdevs": 4, 00:32:45.624 "num_base_bdevs_discovered": 4, 00:32:45.624 "num_base_bdevs_operational": 4, 00:32:45.624 "base_bdevs_list": [ 00:32:45.624 { 00:32:45.624 "name": "pt1", 00:32:45.624 "uuid": "474e44ac-a528-5229-acd0-9ae393f81268", 00:32:45.624 "is_configured": true, 00:32:45.624 "data_offset": 2048, 00:32:45.624 "data_size": 63488 00:32:45.624 }, 00:32:45.624 { 00:32:45.624 "name": "pt2", 00:32:45.624 "uuid": "b01007ed-5129-5893-bb82-b6c7a1cd822b", 00:32:45.624 "is_configured": true, 00:32:45.624 "data_offset": 2048, 00:32:45.624 "data_size": 63488 00:32:45.624 }, 00:32:45.624 { 00:32:45.624 "name": "pt3", 00:32:45.624 "uuid": "4710670e-0739-5798-889e-dd49b4020526", 00:32:45.624 "is_configured": true, 00:32:45.624 "data_offset": 2048, 00:32:45.624 "data_size": 63488 00:32:45.624 }, 00:32:45.624 { 00:32:45.624 "name": "pt4", 00:32:45.624 "uuid": "2c3b0c3f-6e8f-5673-9a04-705449d9871d", 00:32:45.624 "is_configured": true, 00:32:45.624 "data_offset": 2048, 00:32:45.624 "data_size": 63488 00:32:45.624 } 00:32:45.624 ] 00:32:45.624 }' 00:32:45.624 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:45.624 11:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.560 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:46.560 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_name=raid_bdev1 00:32:46.560 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:32:46.560 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:32:46.560 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:32:46.560 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:32:46.560 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:46.560 11:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:32:46.560 [2024-05-15 11:26:05.030637] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:46.560 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:32:46.560 "name": "raid_bdev1", 00:32:46.560 "aliases": [ 00:32:46.560 "92353ad0-62f9-4023-a5b5-6fd9455f5bb5" 00:32:46.560 ], 00:32:46.560 "product_name": "Raid Volume", 00:32:46.560 "block_size": 512, 00:32:46.560 "num_blocks": 253952, 00:32:46.561 "uuid": "92353ad0-62f9-4023-a5b5-6fd9455f5bb5", 00:32:46.561 "assigned_rate_limits": { 00:32:46.561 "rw_ios_per_sec": 0, 00:32:46.561 "rw_mbytes_per_sec": 0, 00:32:46.561 "r_mbytes_per_sec": 0, 00:32:46.561 "w_mbytes_per_sec": 0 00:32:46.561 }, 00:32:46.561 "claimed": false, 00:32:46.561 "zoned": false, 00:32:46.561 "supported_io_types": { 00:32:46.561 "read": true, 00:32:46.561 "write": true, 00:32:46.561 "unmap": true, 00:32:46.561 "write_zeroes": true, 00:32:46.561 "flush": true, 00:32:46.561 "reset": true, 00:32:46.561 "compare": false, 00:32:46.561 "compare_and_write": false, 00:32:46.561 "abort": false, 00:32:46.561 "nvme_admin": false, 00:32:46.561 "nvme_io": false 00:32:46.561 }, 00:32:46.561 "memory_domains": [ 00:32:46.561 { 00:32:46.561 "dma_device_id": "system", 00:32:46.561 "dma_device_type": 1 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.561 "dma_device_type": 2 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "dma_device_id": "system", 00:32:46.561 "dma_device_type": 1 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.561 "dma_device_type": 2 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "dma_device_id": "system", 00:32:46.561 "dma_device_type": 1 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.561 "dma_device_type": 2 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "dma_device_id": "system", 00:32:46.561 "dma_device_type": 1 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.561 "dma_device_type": 2 00:32:46.561 } 00:32:46.561 ], 00:32:46.561 "driver_specific": { 00:32:46.561 "raid": { 00:32:46.561 "uuid": "92353ad0-62f9-4023-a5b5-6fd9455f5bb5", 00:32:46.561 "strip_size_kb": 64, 00:32:46.561 "state": "online", 00:32:46.561 "raid_level": "concat", 00:32:46.561 "superblock": true, 00:32:46.561 "num_base_bdevs": 4, 00:32:46.561 "num_base_bdevs_discovered": 4, 00:32:46.561 "num_base_bdevs_operational": 4, 00:32:46.561 "base_bdevs_list": [ 00:32:46.561 { 00:32:46.561 "name": "pt1", 00:32:46.561 "uuid": "474e44ac-a528-5229-acd0-9ae393f81268", 00:32:46.561 "is_configured": true, 00:32:46.561 "data_offset": 2048, 00:32:46.561 "data_size": 63488 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "name": "pt2", 00:32:46.561 "uuid": 
"b01007ed-5129-5893-bb82-b6c7a1cd822b", 00:32:46.561 "is_configured": true, 00:32:46.561 "data_offset": 2048, 00:32:46.561 "data_size": 63488 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "name": "pt3", 00:32:46.561 "uuid": "4710670e-0739-5798-889e-dd49b4020526", 00:32:46.561 "is_configured": true, 00:32:46.561 "data_offset": 2048, 00:32:46.561 "data_size": 63488 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "name": "pt4", 00:32:46.561 "uuid": "2c3b0c3f-6e8f-5673-9a04-705449d9871d", 00:32:46.561 "is_configured": true, 00:32:46.561 "data_offset": 2048, 00:32:46.561 "data_size": 63488 00:32:46.561 } 00:32:46.561 ] 00:32:46.561 } 00:32:46.561 } 00:32:46.561 }' 00:32:46.561 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:46.561 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:32:46.561 pt2 00:32:46.561 pt3 00:32:46.561 pt4' 00:32:46.561 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:46.561 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:46.561 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:46.820 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:46.820 "name": "pt1", 00:32:46.820 "aliases": [ 00:32:46.820 "474e44ac-a528-5229-acd0-9ae393f81268" 00:32:46.820 ], 00:32:46.820 "product_name": "passthru", 00:32:46.820 "block_size": 512, 00:32:46.820 "num_blocks": 65536, 00:32:46.820 "uuid": "474e44ac-a528-5229-acd0-9ae393f81268", 00:32:46.820 "assigned_rate_limits": { 00:32:46.820 "rw_ios_per_sec": 0, 00:32:46.820 "rw_mbytes_per_sec": 0, 00:32:46.820 "r_mbytes_per_sec": 0, 00:32:46.820 "w_mbytes_per_sec": 0 00:32:46.820 }, 00:32:46.820 "claimed": true, 00:32:46.820 "claim_type": "exclusive_write", 00:32:46.820 "zoned": false, 00:32:46.820 "supported_io_types": { 00:32:46.820 "read": true, 00:32:46.820 "write": true, 00:32:46.820 "unmap": true, 00:32:46.820 "write_zeroes": true, 00:32:46.820 "flush": true, 00:32:46.820 "reset": true, 00:32:46.820 "compare": false, 00:32:46.820 "compare_and_write": false, 00:32:46.820 "abort": true, 00:32:46.820 "nvme_admin": false, 00:32:46.820 "nvme_io": false 00:32:46.820 }, 00:32:46.820 "memory_domains": [ 00:32:46.820 { 00:32:46.820 "dma_device_id": "system", 00:32:46.820 "dma_device_type": 1 00:32:46.820 }, 00:32:46.820 { 00:32:46.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.820 "dma_device_type": 2 00:32:46.820 } 00:32:46.820 ], 00:32:46.820 "driver_specific": { 00:32:46.820 "passthru": { 00:32:46.820 "name": "pt1", 00:32:46.820 "base_bdev_name": "malloc1" 00:32:46.820 } 00:32:46.820 } 00:32:46.820 }' 00:32:46.820 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:46.820 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:47.078 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:47.078 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:47.078 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:47.078 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:47.078 11:26:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:47.078 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:47.078 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:47.078 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:47.337 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:47.337 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:47.337 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:47.337 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:47.337 11:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:47.596 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:47.596 "name": "pt2", 00:32:47.596 "aliases": [ 00:32:47.596 "b01007ed-5129-5893-bb82-b6c7a1cd822b" 00:32:47.596 ], 00:32:47.596 "product_name": "passthru", 00:32:47.596 "block_size": 512, 00:32:47.596 "num_blocks": 65536, 00:32:47.596 "uuid": "b01007ed-5129-5893-bb82-b6c7a1cd822b", 00:32:47.596 "assigned_rate_limits": { 00:32:47.596 "rw_ios_per_sec": 0, 00:32:47.596 "rw_mbytes_per_sec": 0, 00:32:47.596 "r_mbytes_per_sec": 0, 00:32:47.596 "w_mbytes_per_sec": 0 00:32:47.596 }, 00:32:47.596 "claimed": true, 00:32:47.596 "claim_type": "exclusive_write", 00:32:47.596 "zoned": false, 00:32:47.596 "supported_io_types": { 00:32:47.596 "read": true, 00:32:47.596 "write": true, 00:32:47.596 "unmap": true, 00:32:47.596 "write_zeroes": true, 00:32:47.596 "flush": true, 00:32:47.596 "reset": true, 00:32:47.596 "compare": false, 00:32:47.596 "compare_and_write": false, 00:32:47.596 "abort": true, 00:32:47.596 "nvme_admin": false, 00:32:47.596 "nvme_io": false 00:32:47.596 }, 00:32:47.596 "memory_domains": [ 00:32:47.596 { 00:32:47.596 "dma_device_id": "system", 00:32:47.596 "dma_device_type": 1 00:32:47.596 }, 00:32:47.596 { 00:32:47.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.596 "dma_device_type": 2 00:32:47.596 } 00:32:47.596 ], 00:32:47.596 "driver_specific": { 00:32:47.596 "passthru": { 00:32:47.596 "name": "pt2", 00:32:47.596 "base_bdev_name": "malloc2" 00:32:47.596 } 00:32:47.596 } 00:32:47.596 }' 00:32:47.596 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:47.596 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:47.596 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:47.596 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:47.596 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:47.855 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:47.855 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:47.855 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:47.855 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:47.855 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:47.855 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:47.855 
11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:47.855 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:47.855 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:32:47.855 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:32:48.113 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:48.113 "name": "pt3", 00:32:48.113 "aliases": [ 00:32:48.113 "4710670e-0739-5798-889e-dd49b4020526" 00:32:48.113 ], 00:32:48.113 "product_name": "passthru", 00:32:48.113 "block_size": 512, 00:32:48.113 "num_blocks": 65536, 00:32:48.113 "uuid": "4710670e-0739-5798-889e-dd49b4020526", 00:32:48.113 "assigned_rate_limits": { 00:32:48.113 "rw_ios_per_sec": 0, 00:32:48.113 "rw_mbytes_per_sec": 0, 00:32:48.113 "r_mbytes_per_sec": 0, 00:32:48.113 "w_mbytes_per_sec": 0 00:32:48.113 }, 00:32:48.113 "claimed": true, 00:32:48.113 "claim_type": "exclusive_write", 00:32:48.113 "zoned": false, 00:32:48.113 "supported_io_types": { 00:32:48.113 "read": true, 00:32:48.113 "write": true, 00:32:48.113 "unmap": true, 00:32:48.113 "write_zeroes": true, 00:32:48.113 "flush": true, 00:32:48.113 "reset": true, 00:32:48.113 "compare": false, 00:32:48.113 "compare_and_write": false, 00:32:48.113 "abort": true, 00:32:48.113 "nvme_admin": false, 00:32:48.113 "nvme_io": false 00:32:48.113 }, 00:32:48.113 "memory_domains": [ 00:32:48.113 { 00:32:48.113 "dma_device_id": "system", 00:32:48.113 "dma_device_type": 1 00:32:48.113 }, 00:32:48.113 { 00:32:48.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.113 "dma_device_type": 2 00:32:48.113 } 00:32:48.113 ], 00:32:48.113 "driver_specific": { 00:32:48.113 "passthru": { 00:32:48.113 "name": "pt3", 00:32:48.113 "base_bdev_name": "malloc3" 00:32:48.113 } 00:32:48.113 } 00:32:48.113 }' 00:32:48.113 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:48.113 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:48.375 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:48.375 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:48.375 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:48.375 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:48.375 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:48.375 11:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:48.375 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:48.650 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:48.650 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:48.650 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:48.650 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:32:48.650 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:32:48.650 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq '.[]' 00:32:48.911 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:32:48.911 "name": "pt4", 00:32:48.911 "aliases": [ 00:32:48.911 "2c3b0c3f-6e8f-5673-9a04-705449d9871d" 00:32:48.911 ], 00:32:48.911 "product_name": "passthru", 00:32:48.911 "block_size": 512, 00:32:48.911 "num_blocks": 65536, 00:32:48.911 "uuid": "2c3b0c3f-6e8f-5673-9a04-705449d9871d", 00:32:48.911 "assigned_rate_limits": { 00:32:48.911 "rw_ios_per_sec": 0, 00:32:48.911 "rw_mbytes_per_sec": 0, 00:32:48.911 "r_mbytes_per_sec": 0, 00:32:48.911 "w_mbytes_per_sec": 0 00:32:48.911 }, 00:32:48.911 "claimed": true, 00:32:48.911 "claim_type": "exclusive_write", 00:32:48.911 "zoned": false, 00:32:48.911 "supported_io_types": { 00:32:48.911 "read": true, 00:32:48.911 "write": true, 00:32:48.911 "unmap": true, 00:32:48.911 "write_zeroes": true, 00:32:48.911 "flush": true, 00:32:48.911 "reset": true, 00:32:48.911 "compare": false, 00:32:48.911 "compare_and_write": false, 00:32:48.911 "abort": true, 00:32:48.911 "nvme_admin": false, 00:32:48.911 "nvme_io": false 00:32:48.911 }, 00:32:48.911 "memory_domains": [ 00:32:48.911 { 00:32:48.911 "dma_device_id": "system", 00:32:48.911 "dma_device_type": 1 00:32:48.911 }, 00:32:48.911 { 00:32:48.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.911 "dma_device_type": 2 00:32:48.911 } 00:32:48.911 ], 00:32:48.911 "driver_specific": { 00:32:48.911 "passthru": { 00:32:48.911 "name": "pt4", 00:32:48.911 "base_bdev_name": "malloc4" 00:32:48.911 } 00:32:48.911 } 00:32:48.911 }' 00:32:48.911 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:48.911 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:32:48.911 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:32:48.911 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:49.171 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:32:49.171 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:49.171 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:49.171 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:32:49.171 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:49.171 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:49.171 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:32:49.430 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:32:49.430 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:49.430 11:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:49.689 [2024-05-15 11:26:08.127160] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 92353ad0-62f9-4023-a5b5-6fd9455f5bb5 '!=' 92353ad0-62f9-4023-a5b5-6fd9455f5bb5 ']' 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:32:49.689 11:26:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 69496 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 69496 ']' 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 69496 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69496 00:32:49.689 killing process with pid 69496 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69496' 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 69496 00:32:49.689 11:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 69496 00:32:49.689 [2024-05-15 11:26:08.167542] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:49.689 [2024-05-15 11:26:08.167645] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:49.689 [2024-05-15 11:26:08.167696] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:49.689 [2024-05-15 11:26:08.167708] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:32:49.947 [2024-05-15 11:26:08.482613] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:51.325 11:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:32:51.325 00:32:51.325 real 0m17.808s 00:32:51.325 user 0m32.487s 00:32:51.325 sys 0m1.837s 00:32:51.325 11:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:51.325 11:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.325 ************************************ 00:32:51.325 END TEST raid_superblock_test 00:32:51.325 ************************************ 00:32:51.325 11:26:09 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:32:51.325 11:26:09 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:32:51.325 11:26:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:32:51.325 11:26:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:51.325 11:26:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:51.325 ************************************ 00:32:51.325 START TEST raid_state_function_test 00:32:51.325 ************************************ 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- 
# local superblock=false 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:32:51.325 Process raid pid: 70049 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=70049 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 70049' 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 70049 /var/tmp/spdk-raid.sock 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 70049 ']' 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:51.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:51.325 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.325 [2024-05-15 11:26:09.916011] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:32:51.325 [2024-05-15 11:26:09.916242] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.584 [2024-05-15 11:26:10.087672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.842 [2024-05-15 11:26:10.348981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.100 [2024-05-15 11:26:10.587977] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:52.359 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:52.359 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:32:52.359 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:52.618 [2024-05-15 11:26:11.026484] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:52.618 [2024-05-15 11:26:11.026576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:52.618 [2024-05-15 11:26:11.026594] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:52.618 [2024-05-15 11:26:11.026615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:52.618 [2024-05-15 11:26:11.026624] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:52.618 [2024-05-15 11:26:11.026671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:52.618 [2024-05-15 11:26:11.026682] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:52.618 [2024-05-15 11:26:11.026706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.618 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:52.876 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:52.876 "name": "Existed_Raid", 00:32:52.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.876 "strip_size_kb": 0, 00:32:52.876 "state": "configuring", 00:32:52.876 "raid_level": "raid1", 00:32:52.876 "superblock": false, 00:32:52.876 "num_base_bdevs": 4, 00:32:52.876 "num_base_bdevs_discovered": 0, 00:32:52.876 "num_base_bdevs_operational": 4, 00:32:52.876 "base_bdevs_list": [ 00:32:52.876 { 00:32:52.876 "name": "BaseBdev1", 00:32:52.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.876 "is_configured": false, 00:32:52.876 "data_offset": 0, 00:32:52.876 "data_size": 0 00:32:52.876 }, 00:32:52.876 { 00:32:52.876 "name": "BaseBdev2", 00:32:52.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.876 "is_configured": false, 00:32:52.876 "data_offset": 0, 00:32:52.876 "data_size": 0 00:32:52.876 }, 00:32:52.876 { 00:32:52.876 "name": "BaseBdev3", 00:32:52.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.876 "is_configured": false, 00:32:52.876 "data_offset": 0, 00:32:52.876 "data_size": 0 00:32:52.876 }, 00:32:52.876 { 00:32:52.876 "name": "BaseBdev4", 00:32:52.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.876 "is_configured": false, 00:32:52.876 "data_offset": 0, 00:32:52.876 "data_size": 0 00:32:52.876 } 00:32:52.876 ] 00:32:52.876 }' 00:32:52.876 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:52.876 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.442 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:53.701 [2024-05-15 11:26:12.246559] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:53.701 [2024-05-15 11:26:12.246614] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:32:53.701 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:53.960 [2024-05-15 11:26:12.490596] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:53.960 [2024-05-15 11:26:12.490689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:53.960 [2024-05-15 11:26:12.490708] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:53.960 
[2024-05-15 11:26:12.490740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:53.960 [2024-05-15 11:26:12.490750] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:53.960 [2024-05-15 11:26:12.490770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:53.960 [2024-05-15 11:26:12.490778] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:53.960 [2024-05-15 11:26:12.491000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:53.960 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:54.218 [2024-05-15 11:26:12.777711] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:54.218 BaseBdev1 00:32:54.218 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:32:54.218 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:32:54.218 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:54.218 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:54.218 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:54.218 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:54.218 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:54.477 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:54.734 [ 00:32:54.734 { 00:32:54.734 "name": "BaseBdev1", 00:32:54.734 "aliases": [ 00:32:54.734 "9133fd73-f9aa-4390-b6f8-f06e335b0e74" 00:32:54.734 ], 00:32:54.734 "product_name": "Malloc disk", 00:32:54.734 "block_size": 512, 00:32:54.734 "num_blocks": 65536, 00:32:54.734 "uuid": "9133fd73-f9aa-4390-b6f8-f06e335b0e74", 00:32:54.734 "assigned_rate_limits": { 00:32:54.734 "rw_ios_per_sec": 0, 00:32:54.734 "rw_mbytes_per_sec": 0, 00:32:54.734 "r_mbytes_per_sec": 0, 00:32:54.734 "w_mbytes_per_sec": 0 00:32:54.735 }, 00:32:54.735 "claimed": true, 00:32:54.735 "claim_type": "exclusive_write", 00:32:54.735 "zoned": false, 00:32:54.735 "supported_io_types": { 00:32:54.735 "read": true, 00:32:54.735 "write": true, 00:32:54.735 "unmap": true, 00:32:54.735 "write_zeroes": true, 00:32:54.735 "flush": true, 00:32:54.735 "reset": true, 00:32:54.735 "compare": false, 00:32:54.735 "compare_and_write": false, 00:32:54.735 "abort": true, 00:32:54.735 "nvme_admin": false, 00:32:54.735 "nvme_io": false 00:32:54.735 }, 00:32:54.735 "memory_domains": [ 00:32:54.735 { 00:32:54.735 "dma_device_id": "system", 00:32:54.735 "dma_device_type": 1 00:32:54.735 }, 00:32:54.735 { 00:32:54.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:54.735 "dma_device_type": 2 00:32:54.735 } 00:32:54.735 ], 00:32:54.735 "driver_specific": {} 00:32:54.735 } 00:32:54.735 ] 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:54.735 11:26:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.735 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:54.993 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:54.993 "name": "Existed_Raid", 00:32:54.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.993 "strip_size_kb": 0, 00:32:54.993 "state": "configuring", 00:32:54.993 "raid_level": "raid1", 00:32:54.993 "superblock": false, 00:32:54.993 "num_base_bdevs": 4, 00:32:54.993 "num_base_bdevs_discovered": 1, 00:32:54.993 "num_base_bdevs_operational": 4, 00:32:54.993 "base_bdevs_list": [ 00:32:54.993 { 00:32:54.993 "name": "BaseBdev1", 00:32:54.993 "uuid": "9133fd73-f9aa-4390-b6f8-f06e335b0e74", 00:32:54.993 "is_configured": true, 00:32:54.993 "data_offset": 0, 00:32:54.993 "data_size": 65536 00:32:54.993 }, 00:32:54.993 { 00:32:54.993 "name": "BaseBdev2", 00:32:54.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.993 "is_configured": false, 00:32:54.993 "data_offset": 0, 00:32:54.993 "data_size": 0 00:32:54.993 }, 00:32:54.993 { 00:32:54.993 "name": "BaseBdev3", 00:32:54.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.993 "is_configured": false, 00:32:54.993 "data_offset": 0, 00:32:54.993 "data_size": 0 00:32:54.993 }, 00:32:54.993 { 00:32:54.993 "name": "BaseBdev4", 00:32:54.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.993 "is_configured": false, 00:32:54.993 "data_offset": 0, 00:32:54.993 "data_size": 0 00:32:54.993 } 00:32:54.993 ] 00:32:54.993 }' 00:32:54.993 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:54.993 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.931 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:55.931 [2024-05-15 11:26:14.546038] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:55.931 [2024-05-15 11:26:14.546098] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:32:55.931 
11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:56.191 [2024-05-15 11:26:14.762100] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:56.191 [2024-05-15 11:26:14.763792] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:56.191 [2024-05-15 11:26:14.763878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:56.191 [2024-05-15 11:26:14.763918] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:56.191 [2024-05-15 11:26:14.763963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:56.191 [2024-05-15 11:26:14.763972] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:56.191 [2024-05-15 11:26:14.763989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:56.191 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.451 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:56.451 "name": "Existed_Raid", 00:32:56.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.451 "strip_size_kb": 0, 00:32:56.451 "state": "configuring", 00:32:56.451 "raid_level": "raid1", 00:32:56.451 "superblock": false, 00:32:56.451 "num_base_bdevs": 4, 00:32:56.451 "num_base_bdevs_discovered": 1, 00:32:56.451 "num_base_bdevs_operational": 4, 00:32:56.451 "base_bdevs_list": [ 00:32:56.451 { 00:32:56.451 "name": "BaseBdev1", 00:32:56.451 "uuid": "9133fd73-f9aa-4390-b6f8-f06e335b0e74", 00:32:56.451 "is_configured": true, 00:32:56.451 "data_offset": 0, 00:32:56.451 "data_size": 65536 00:32:56.451 }, 
00:32:56.451 { 00:32:56.451 "name": "BaseBdev2", 00:32:56.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.451 "is_configured": false, 00:32:56.451 "data_offset": 0, 00:32:56.451 "data_size": 0 00:32:56.451 }, 00:32:56.451 { 00:32:56.451 "name": "BaseBdev3", 00:32:56.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.451 "is_configured": false, 00:32:56.451 "data_offset": 0, 00:32:56.451 "data_size": 0 00:32:56.451 }, 00:32:56.451 { 00:32:56.451 "name": "BaseBdev4", 00:32:56.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.451 "is_configured": false, 00:32:56.451 "data_offset": 0, 00:32:56.451 "data_size": 0 00:32:56.451 } 00:32:56.451 ] 00:32:56.451 }' 00:32:56.451 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:56.451 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.386 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:57.386 [2024-05-15 11:26:15.963845] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:57.386 BaseBdev2 00:32:57.386 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:32:57.386 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:57.386 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:57.386 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:57.386 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:57.386 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:57.386 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:57.646 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:57.905 [ 00:32:57.906 { 00:32:57.906 "name": "BaseBdev2", 00:32:57.906 "aliases": [ 00:32:57.906 "96c8b6cc-2fd4-4e8d-b7b8-cbf6f578677d" 00:32:57.906 ], 00:32:57.906 "product_name": "Malloc disk", 00:32:57.906 "block_size": 512, 00:32:57.906 "num_blocks": 65536, 00:32:57.906 "uuid": "96c8b6cc-2fd4-4e8d-b7b8-cbf6f578677d", 00:32:57.906 "assigned_rate_limits": { 00:32:57.906 "rw_ios_per_sec": 0, 00:32:57.906 "rw_mbytes_per_sec": 0, 00:32:57.906 "r_mbytes_per_sec": 0, 00:32:57.906 "w_mbytes_per_sec": 0 00:32:57.906 }, 00:32:57.906 "claimed": true, 00:32:57.906 "claim_type": "exclusive_write", 00:32:57.906 "zoned": false, 00:32:57.906 "supported_io_types": { 00:32:57.906 "read": true, 00:32:57.906 "write": true, 00:32:57.906 "unmap": true, 00:32:57.906 "write_zeroes": true, 00:32:57.906 "flush": true, 00:32:57.906 "reset": true, 00:32:57.906 "compare": false, 00:32:57.906 "compare_and_write": false, 00:32:57.906 "abort": true, 00:32:57.906 "nvme_admin": false, 00:32:57.906 "nvme_io": false 00:32:57.906 }, 00:32:57.906 "memory_domains": [ 00:32:57.906 { 00:32:57.906 "dma_device_id": "system", 00:32:57.906 "dma_device_type": 1 00:32:57.906 }, 00:32:57.906 { 00:32:57.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:57.906 
"dma_device_type": 2 00:32:57.906 } 00:32:57.906 ], 00:32:57.906 "driver_specific": {} 00:32:57.906 } 00:32:57.906 ] 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:57.906 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.165 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:58.165 "name": "Existed_Raid", 00:32:58.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.165 "strip_size_kb": 0, 00:32:58.165 "state": "configuring", 00:32:58.165 "raid_level": "raid1", 00:32:58.165 "superblock": false, 00:32:58.165 "num_base_bdevs": 4, 00:32:58.165 "num_base_bdevs_discovered": 2, 00:32:58.165 "num_base_bdevs_operational": 4, 00:32:58.165 "base_bdevs_list": [ 00:32:58.165 { 00:32:58.165 "name": "BaseBdev1", 00:32:58.165 "uuid": "9133fd73-f9aa-4390-b6f8-f06e335b0e74", 00:32:58.165 "is_configured": true, 00:32:58.165 "data_offset": 0, 00:32:58.165 "data_size": 65536 00:32:58.165 }, 00:32:58.165 { 00:32:58.165 "name": "BaseBdev2", 00:32:58.165 "uuid": "96c8b6cc-2fd4-4e8d-b7b8-cbf6f578677d", 00:32:58.165 "is_configured": true, 00:32:58.165 "data_offset": 0, 00:32:58.165 "data_size": 65536 00:32:58.165 }, 00:32:58.165 { 00:32:58.165 "name": "BaseBdev3", 00:32:58.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.165 "is_configured": false, 00:32:58.165 "data_offset": 0, 00:32:58.165 "data_size": 0 00:32:58.165 }, 00:32:58.165 { 00:32:58.165 "name": "BaseBdev4", 00:32:58.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.165 "is_configured": false, 00:32:58.165 "data_offset": 0, 00:32:58.165 "data_size": 0 00:32:58.165 } 00:32:58.165 ] 00:32:58.165 }' 00:32:58.165 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:58.165 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.101 11:26:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:59.101 BaseBdev3 00:32:59.101 [2024-05-15 11:26:17.693035] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:59.101 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:32:59.101 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:32:59.101 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:59.101 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:59.101 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:59.101 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:59.101 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:59.360 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:59.688 [ 00:32:59.688 { 00:32:59.688 "name": "BaseBdev3", 00:32:59.688 "aliases": [ 00:32:59.688 "1b4636d1-2752-49b1-aa95-07a053520304" 00:32:59.688 ], 00:32:59.688 "product_name": "Malloc disk", 00:32:59.688 "block_size": 512, 00:32:59.688 "num_blocks": 65536, 00:32:59.688 "uuid": "1b4636d1-2752-49b1-aa95-07a053520304", 00:32:59.688 "assigned_rate_limits": { 00:32:59.688 "rw_ios_per_sec": 0, 00:32:59.688 "rw_mbytes_per_sec": 0, 00:32:59.688 "r_mbytes_per_sec": 0, 00:32:59.688 "w_mbytes_per_sec": 0 00:32:59.688 }, 00:32:59.688 "claimed": true, 00:32:59.688 "claim_type": "exclusive_write", 00:32:59.688 "zoned": false, 00:32:59.688 "supported_io_types": { 00:32:59.688 "read": true, 00:32:59.688 "write": true, 00:32:59.688 "unmap": true, 00:32:59.688 "write_zeroes": true, 00:32:59.688 "flush": true, 00:32:59.688 "reset": true, 00:32:59.688 "compare": false, 00:32:59.688 "compare_and_write": false, 00:32:59.688 "abort": true, 00:32:59.688 "nvme_admin": false, 00:32:59.688 "nvme_io": false 00:32:59.688 }, 00:32:59.688 "memory_domains": [ 00:32:59.688 { 00:32:59.688 "dma_device_id": "system", 00:32:59.688 "dma_device_type": 1 00:32:59.688 }, 00:32:59.688 { 00:32:59.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:59.688 "dma_device_type": 2 00:32:59.688 } 00:32:59.688 ], 00:32:59.688 "driver_specific": {} 00:32:59.688 } 00:32:59.688 ] 00:32:59.688 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:59.688 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:32:59.688 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.689 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:59.948 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:59.948 "name": "Existed_Raid", 00:32:59.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.948 "strip_size_kb": 0, 00:32:59.948 "state": "configuring", 00:32:59.948 "raid_level": "raid1", 00:32:59.948 "superblock": false, 00:32:59.948 "num_base_bdevs": 4, 00:32:59.948 "num_base_bdevs_discovered": 3, 00:32:59.948 "num_base_bdevs_operational": 4, 00:32:59.948 "base_bdevs_list": [ 00:32:59.948 { 00:32:59.948 "name": "BaseBdev1", 00:32:59.948 "uuid": "9133fd73-f9aa-4390-b6f8-f06e335b0e74", 00:32:59.948 "is_configured": true, 00:32:59.948 "data_offset": 0, 00:32:59.948 "data_size": 65536 00:32:59.948 }, 00:32:59.948 { 00:32:59.948 "name": "BaseBdev2", 00:32:59.948 "uuid": "96c8b6cc-2fd4-4e8d-b7b8-cbf6f578677d", 00:32:59.948 "is_configured": true, 00:32:59.948 "data_offset": 0, 00:32:59.948 "data_size": 65536 00:32:59.948 }, 00:32:59.948 { 00:32:59.948 "name": "BaseBdev3", 00:32:59.948 "uuid": "1b4636d1-2752-49b1-aa95-07a053520304", 00:32:59.948 "is_configured": true, 00:32:59.948 "data_offset": 0, 00:32:59.948 "data_size": 65536 00:32:59.948 }, 00:32:59.948 { 00:32:59.948 "name": "BaseBdev4", 00:32:59.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.948 "is_configured": false, 00:32:59.948 "data_offset": 0, 00:32:59.948 "data_size": 0 00:32:59.948 } 00:32:59.948 ] 00:32:59.948 }' 00:32:59.948 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:59.948 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.514 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:01.078 [2024-05-15 11:26:19.410081] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:01.078 [2024-05-15 11:26:19.410168] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:33:01.078 [2024-05-15 11:26:19.410193] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:01.078 [2024-05-15 11:26:19.410336] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:33:01.078 [2024-05-15 11:26:19.410606] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:33:01.078 [2024-05-15 11:26:19.410621] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name Existed_Raid, raid_bdev 0x617000011c00 00:33:01.078 BaseBdev4 00:33:01.078 [2024-05-15 11:26:19.411065] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:01.078 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:33:01.078 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:33:01.078 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:01.078 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:33:01.078 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:01.078 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:01.078 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:01.078 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:01.335 [ 00:33:01.335 { 00:33:01.335 "name": "BaseBdev4", 00:33:01.335 "aliases": [ 00:33:01.335 "bc470aba-9aac-476b-997d-3d9a22c5ebd3" 00:33:01.335 ], 00:33:01.335 "product_name": "Malloc disk", 00:33:01.335 "block_size": 512, 00:33:01.335 "num_blocks": 65536, 00:33:01.335 "uuid": "bc470aba-9aac-476b-997d-3d9a22c5ebd3", 00:33:01.335 "assigned_rate_limits": { 00:33:01.335 "rw_ios_per_sec": 0, 00:33:01.335 "rw_mbytes_per_sec": 0, 00:33:01.335 "r_mbytes_per_sec": 0, 00:33:01.335 "w_mbytes_per_sec": 0 00:33:01.335 }, 00:33:01.335 "claimed": true, 00:33:01.335 "claim_type": "exclusive_write", 00:33:01.335 "zoned": false, 00:33:01.335 "supported_io_types": { 00:33:01.335 "read": true, 00:33:01.335 "write": true, 00:33:01.335 "unmap": true, 00:33:01.335 "write_zeroes": true, 00:33:01.335 "flush": true, 00:33:01.335 "reset": true, 00:33:01.335 "compare": false, 00:33:01.335 "compare_and_write": false, 00:33:01.335 "abort": true, 00:33:01.335 "nvme_admin": false, 00:33:01.335 "nvme_io": false 00:33:01.335 }, 00:33:01.335 "memory_domains": [ 00:33:01.335 { 00:33:01.335 "dma_device_id": "system", 00:33:01.335 "dma_device_type": 1 00:33:01.335 }, 00:33:01.335 { 00:33:01.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.335 "dma_device_type": 2 00:33:01.335 } 00:33:01.335 ], 00:33:01.335 "driver_specific": {} 00:33:01.335 } 00:33:01.335 ] 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:01.335 11:26:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.335 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:01.594 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:01.594 "name": "Existed_Raid", 00:33:01.594 "uuid": "f17612a2-780f-4a75-b831-dc0badb374bf", 00:33:01.594 "strip_size_kb": 0, 00:33:01.594 "state": "online", 00:33:01.594 "raid_level": "raid1", 00:33:01.594 "superblock": false, 00:33:01.594 "num_base_bdevs": 4, 00:33:01.594 "num_base_bdevs_discovered": 4, 00:33:01.594 "num_base_bdevs_operational": 4, 00:33:01.594 "base_bdevs_list": [ 00:33:01.594 { 00:33:01.594 "name": "BaseBdev1", 00:33:01.594 "uuid": "9133fd73-f9aa-4390-b6f8-f06e335b0e74", 00:33:01.594 "is_configured": true, 00:33:01.594 "data_offset": 0, 00:33:01.594 "data_size": 65536 00:33:01.594 }, 00:33:01.594 { 00:33:01.594 "name": "BaseBdev2", 00:33:01.594 "uuid": "96c8b6cc-2fd4-4e8d-b7b8-cbf6f578677d", 00:33:01.594 "is_configured": true, 00:33:01.594 "data_offset": 0, 00:33:01.594 "data_size": 65536 00:33:01.594 }, 00:33:01.594 { 00:33:01.594 "name": "BaseBdev3", 00:33:01.594 "uuid": "1b4636d1-2752-49b1-aa95-07a053520304", 00:33:01.594 "is_configured": true, 00:33:01.594 "data_offset": 0, 00:33:01.594 "data_size": 65536 00:33:01.594 }, 00:33:01.594 { 00:33:01.594 "name": "BaseBdev4", 00:33:01.594 "uuid": "bc470aba-9aac-476b-997d-3d9a22c5ebd3", 00:33:01.594 "is_configured": true, 00:33:01.594 "data_offset": 0, 00:33:01.594 "data_size": 65536 00:33:01.594 } 00:33:01.594 ] 00:33:01.594 }' 00:33:01.594 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:01.594 11:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.529 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:33:02.529 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:33:02.529 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:33:02.529 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:33:02.530 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:33:02.530 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:33:02.530 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:02.530 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:33:02.530 [2024-05-15 11:26:21.062681] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:33:02.530 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:33:02.530 "name": "Existed_Raid", 00:33:02.530 "aliases": [ 00:33:02.530 "f17612a2-780f-4a75-b831-dc0badb374bf" 00:33:02.530 ], 00:33:02.530 "product_name": "Raid Volume", 00:33:02.530 "block_size": 512, 00:33:02.530 "num_blocks": 65536, 00:33:02.530 "uuid": "f17612a2-780f-4a75-b831-dc0badb374bf", 00:33:02.530 "assigned_rate_limits": { 00:33:02.530 "rw_ios_per_sec": 0, 00:33:02.530 "rw_mbytes_per_sec": 0, 00:33:02.530 "r_mbytes_per_sec": 0, 00:33:02.530 "w_mbytes_per_sec": 0 00:33:02.530 }, 00:33:02.530 "claimed": false, 00:33:02.530 "zoned": false, 00:33:02.530 "supported_io_types": { 00:33:02.530 "read": true, 00:33:02.530 "write": true, 00:33:02.530 "unmap": false, 00:33:02.530 "write_zeroes": true, 00:33:02.530 "flush": false, 00:33:02.530 "reset": true, 00:33:02.530 "compare": false, 00:33:02.530 "compare_and_write": false, 00:33:02.530 "abort": false, 00:33:02.530 "nvme_admin": false, 00:33:02.530 "nvme_io": false 00:33:02.530 }, 00:33:02.530 "memory_domains": [ 00:33:02.530 { 00:33:02.530 "dma_device_id": "system", 00:33:02.530 "dma_device_type": 1 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.530 "dma_device_type": 2 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "dma_device_id": "system", 00:33:02.530 "dma_device_type": 1 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.530 "dma_device_type": 2 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "dma_device_id": "system", 00:33:02.530 "dma_device_type": 1 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.530 "dma_device_type": 2 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "dma_device_id": "system", 00:33:02.530 "dma_device_type": 1 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.530 "dma_device_type": 2 00:33:02.530 } 00:33:02.530 ], 00:33:02.530 "driver_specific": { 00:33:02.530 "raid": { 00:33:02.530 "uuid": "f17612a2-780f-4a75-b831-dc0badb374bf", 00:33:02.530 "strip_size_kb": 0, 00:33:02.530 "state": "online", 00:33:02.530 "raid_level": "raid1", 00:33:02.530 "superblock": false, 00:33:02.530 "num_base_bdevs": 4, 00:33:02.530 "num_base_bdevs_discovered": 4, 00:33:02.530 "num_base_bdevs_operational": 4, 00:33:02.530 "base_bdevs_list": [ 00:33:02.530 { 00:33:02.530 "name": "BaseBdev1", 00:33:02.530 "uuid": "9133fd73-f9aa-4390-b6f8-f06e335b0e74", 00:33:02.530 "is_configured": true, 00:33:02.530 "data_offset": 0, 00:33:02.530 "data_size": 65536 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "name": "BaseBdev2", 00:33:02.530 "uuid": "96c8b6cc-2fd4-4e8d-b7b8-cbf6f578677d", 00:33:02.530 "is_configured": true, 00:33:02.530 "data_offset": 0, 00:33:02.530 "data_size": 65536 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "name": "BaseBdev3", 00:33:02.530 "uuid": "1b4636d1-2752-49b1-aa95-07a053520304", 00:33:02.530 "is_configured": true, 00:33:02.530 "data_offset": 0, 00:33:02.530 "data_size": 65536 00:33:02.530 }, 00:33:02.530 { 00:33:02.530 "name": "BaseBdev4", 00:33:02.530 "uuid": "bc470aba-9aac-476b-997d-3d9a22c5ebd3", 00:33:02.530 "is_configured": true, 00:33:02.530 "data_offset": 0, 00:33:02.530 "data_size": 65536 00:33:02.530 } 00:33:02.530 ] 00:33:02.530 } 00:33:02.530 } 00:33:02.530 }' 00:33:02.530 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:33:02.530 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:33:02.530 BaseBdev2 00:33:02.530 BaseBdev3 00:33:02.530 BaseBdev4' 00:33:02.530 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:02.530 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:02.530 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:02.788 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:02.788 "name": "BaseBdev1", 00:33:02.788 "aliases": [ 00:33:02.788 "9133fd73-f9aa-4390-b6f8-f06e335b0e74" 00:33:02.788 ], 00:33:02.788 "product_name": "Malloc disk", 00:33:02.788 "block_size": 512, 00:33:02.788 "num_blocks": 65536, 00:33:02.788 "uuid": "9133fd73-f9aa-4390-b6f8-f06e335b0e74", 00:33:02.788 "assigned_rate_limits": { 00:33:02.788 "rw_ios_per_sec": 0, 00:33:02.788 "rw_mbytes_per_sec": 0, 00:33:02.788 "r_mbytes_per_sec": 0, 00:33:02.788 "w_mbytes_per_sec": 0 00:33:02.788 }, 00:33:02.788 "claimed": true, 00:33:02.788 "claim_type": "exclusive_write", 00:33:02.788 "zoned": false, 00:33:02.788 "supported_io_types": { 00:33:02.788 "read": true, 00:33:02.788 "write": true, 00:33:02.788 "unmap": true, 00:33:02.788 "write_zeroes": true, 00:33:02.788 "flush": true, 00:33:02.788 "reset": true, 00:33:02.788 "compare": false, 00:33:02.788 "compare_and_write": false, 00:33:02.788 "abort": true, 00:33:02.788 "nvme_admin": false, 00:33:02.788 "nvme_io": false 00:33:02.788 }, 00:33:02.788 "memory_domains": [ 00:33:02.788 { 00:33:02.788 "dma_device_id": "system", 00:33:02.788 "dma_device_type": 1 00:33:02.788 }, 00:33:02.788 { 00:33:02.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.788 "dma_device_type": 2 00:33:02.788 } 00:33:02.788 ], 00:33:02.788 "driver_specific": {} 00:33:02.788 }' 00:33:02.788 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:02.788 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:03.070 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:03.070 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:03.070 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:03.070 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:03.070 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:03.070 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:03.349 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:03.349 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:03.349 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:03.349 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:03.349 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:03.349 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:03.349 
11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:03.608 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:03.608 "name": "BaseBdev2", 00:33:03.608 "aliases": [ 00:33:03.608 "96c8b6cc-2fd4-4e8d-b7b8-cbf6f578677d" 00:33:03.608 ], 00:33:03.608 "product_name": "Malloc disk", 00:33:03.608 "block_size": 512, 00:33:03.608 "num_blocks": 65536, 00:33:03.608 "uuid": "96c8b6cc-2fd4-4e8d-b7b8-cbf6f578677d", 00:33:03.608 "assigned_rate_limits": { 00:33:03.608 "rw_ios_per_sec": 0, 00:33:03.608 "rw_mbytes_per_sec": 0, 00:33:03.608 "r_mbytes_per_sec": 0, 00:33:03.608 "w_mbytes_per_sec": 0 00:33:03.608 }, 00:33:03.608 "claimed": true, 00:33:03.608 "claim_type": "exclusive_write", 00:33:03.608 "zoned": false, 00:33:03.608 "supported_io_types": { 00:33:03.608 "read": true, 00:33:03.608 "write": true, 00:33:03.608 "unmap": true, 00:33:03.608 "write_zeroes": true, 00:33:03.608 "flush": true, 00:33:03.608 "reset": true, 00:33:03.608 "compare": false, 00:33:03.608 "compare_and_write": false, 00:33:03.608 "abort": true, 00:33:03.608 "nvme_admin": false, 00:33:03.608 "nvme_io": false 00:33:03.608 }, 00:33:03.608 "memory_domains": [ 00:33:03.608 { 00:33:03.608 "dma_device_id": "system", 00:33:03.608 "dma_device_type": 1 00:33:03.608 }, 00:33:03.608 { 00:33:03.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.608 "dma_device_type": 2 00:33:03.608 } 00:33:03.608 ], 00:33:03.608 "driver_specific": {} 00:33:03.608 }' 00:33:03.608 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:03.608 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:03.608 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:03.608 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:03.608 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:03.866 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:03.866 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:03.866 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:03.866 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:03.866 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:03.866 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:04.124 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:04.124 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:04.124 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:04.124 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:04.124 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:04.124 "name": "BaseBdev3", 00:33:04.124 "aliases": [ 00:33:04.124 "1b4636d1-2752-49b1-aa95-07a053520304" 00:33:04.124 ], 00:33:04.124 "product_name": "Malloc disk", 00:33:04.124 "block_size": 512, 00:33:04.124 "num_blocks": 65536, 00:33:04.124 "uuid": "1b4636d1-2752-49b1-aa95-07a053520304", 00:33:04.124 
"assigned_rate_limits": { 00:33:04.124 "rw_ios_per_sec": 0, 00:33:04.124 "rw_mbytes_per_sec": 0, 00:33:04.124 "r_mbytes_per_sec": 0, 00:33:04.124 "w_mbytes_per_sec": 0 00:33:04.124 }, 00:33:04.124 "claimed": true, 00:33:04.124 "claim_type": "exclusive_write", 00:33:04.124 "zoned": false, 00:33:04.124 "supported_io_types": { 00:33:04.124 "read": true, 00:33:04.124 "write": true, 00:33:04.124 "unmap": true, 00:33:04.124 "write_zeroes": true, 00:33:04.124 "flush": true, 00:33:04.124 "reset": true, 00:33:04.124 "compare": false, 00:33:04.124 "compare_and_write": false, 00:33:04.124 "abort": true, 00:33:04.124 "nvme_admin": false, 00:33:04.124 "nvme_io": false 00:33:04.124 }, 00:33:04.124 "memory_domains": [ 00:33:04.124 { 00:33:04.124 "dma_device_id": "system", 00:33:04.124 "dma_device_type": 1 00:33:04.124 }, 00:33:04.124 { 00:33:04.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:04.124 "dma_device_type": 2 00:33:04.124 } 00:33:04.124 ], 00:33:04.124 "driver_specific": {} 00:33:04.124 }' 00:33:04.124 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:04.382 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:04.382 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:04.382 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:04.382 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:04.382 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:04.382 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:04.382 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:04.642 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:04.642 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:04.642 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:04.642 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:04.642 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:04.642 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:04.642 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:04.902 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:04.902 "name": "BaseBdev4", 00:33:04.902 "aliases": [ 00:33:04.902 "bc470aba-9aac-476b-997d-3d9a22c5ebd3" 00:33:04.902 ], 00:33:04.902 "product_name": "Malloc disk", 00:33:04.902 "block_size": 512, 00:33:04.902 "num_blocks": 65536, 00:33:04.902 "uuid": "bc470aba-9aac-476b-997d-3d9a22c5ebd3", 00:33:04.902 "assigned_rate_limits": { 00:33:04.902 "rw_ios_per_sec": 0, 00:33:04.902 "rw_mbytes_per_sec": 0, 00:33:04.902 "r_mbytes_per_sec": 0, 00:33:04.902 "w_mbytes_per_sec": 0 00:33:04.902 }, 00:33:04.902 "claimed": true, 00:33:04.902 "claim_type": "exclusive_write", 00:33:04.902 "zoned": false, 00:33:04.902 "supported_io_types": { 00:33:04.902 "read": true, 00:33:04.902 "write": true, 00:33:04.902 "unmap": true, 00:33:04.902 "write_zeroes": true, 00:33:04.902 "flush": true, 00:33:04.902 "reset": true, 
00:33:04.902 "compare": false, 00:33:04.903 "compare_and_write": false, 00:33:04.903 "abort": true, 00:33:04.903 "nvme_admin": false, 00:33:04.903 "nvme_io": false 00:33:04.903 }, 00:33:04.903 "memory_domains": [ 00:33:04.903 { 00:33:04.903 "dma_device_id": "system", 00:33:04.903 "dma_device_type": 1 00:33:04.903 }, 00:33:04.903 { 00:33:04.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:04.903 "dma_device_type": 2 00:33:04.903 } 00:33:04.903 ], 00:33:04.903 "driver_specific": {} 00:33:04.903 }' 00:33:04.903 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:04.903 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:04.903 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:04.903 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:04.903 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:05.162 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:05.162 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:05.162 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:05.162 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:05.162 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:05.162 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:05.162 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:05.162 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:05.421 [2024-05-15 11:26:23.967296] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:05.680 "name": "Existed_Raid", 00:33:05.680 "uuid": "f17612a2-780f-4a75-b831-dc0badb374bf", 00:33:05.680 "strip_size_kb": 0, 00:33:05.680 "state": "online", 00:33:05.680 "raid_level": "raid1", 00:33:05.680 "superblock": false, 00:33:05.680 "num_base_bdevs": 4, 00:33:05.680 "num_base_bdevs_discovered": 3, 00:33:05.680 "num_base_bdevs_operational": 3, 00:33:05.680 "base_bdevs_list": [ 00:33:05.680 { 00:33:05.680 "name": null, 00:33:05.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.680 "is_configured": false, 00:33:05.680 "data_offset": 0, 00:33:05.680 "data_size": 65536 00:33:05.680 }, 00:33:05.680 { 00:33:05.680 "name": "BaseBdev2", 00:33:05.680 "uuid": "96c8b6cc-2fd4-4e8d-b7b8-cbf6f578677d", 00:33:05.680 "is_configured": true, 00:33:05.680 "data_offset": 0, 00:33:05.680 "data_size": 65536 00:33:05.680 }, 00:33:05.680 { 00:33:05.680 "name": "BaseBdev3", 00:33:05.680 "uuid": "1b4636d1-2752-49b1-aa95-07a053520304", 00:33:05.680 "is_configured": true, 00:33:05.680 "data_offset": 0, 00:33:05.680 "data_size": 65536 00:33:05.680 }, 00:33:05.680 { 00:33:05.680 "name": "BaseBdev4", 00:33:05.680 "uuid": "bc470aba-9aac-476b-997d-3d9a22c5ebd3", 00:33:05.680 "is_configured": true, 00:33:05.680 "data_offset": 0, 00:33:05.680 "data_size": 65536 00:33:05.680 } 00:33:05.680 ] 00:33:05.680 }' 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:05.680 11:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.615 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:06.615 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:06.615 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.615 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:33:06.615 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:33:06.615 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:06.615 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:06.874 [2024-05-15 11:26:25.441263] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:07.133 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:07.133 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:07.133 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.133 11:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:33:07.392 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:33:07.392 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:07.392 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:07.392 [2024-05-15 11:26:25.991477] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:07.651 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:07.652 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:07.652 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.652 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:33:07.652 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:33:07.652 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:07.652 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:33:07.911 [2024-05-15 11:26:26.459715] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:33:07.911 [2024-05-15 11:26:26.460015] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:07.911 [2024-05-15 11:26:26.540318] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:07.911 [2024-05-15 11:26:26.540488] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:07.911 [2024-05-15 11:26:26.540504] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:33:08.170 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:08.429 BaseBdev2 00:33:08.429 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev 
BaseBdev2 00:33:08.429 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:33:08.429 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:08.429 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:33:08.429 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:08.429 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:08.429 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:08.687 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:08.945 [ 00:33:08.945 { 00:33:08.945 "name": "BaseBdev2", 00:33:08.945 "aliases": [ 00:33:08.945 "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d" 00:33:08.945 ], 00:33:08.945 "product_name": "Malloc disk", 00:33:08.945 "block_size": 512, 00:33:08.945 "num_blocks": 65536, 00:33:08.945 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:08.945 "assigned_rate_limits": { 00:33:08.945 "rw_ios_per_sec": 0, 00:33:08.945 "rw_mbytes_per_sec": 0, 00:33:08.945 "r_mbytes_per_sec": 0, 00:33:08.945 "w_mbytes_per_sec": 0 00:33:08.945 }, 00:33:08.945 "claimed": false, 00:33:08.945 "zoned": false, 00:33:08.945 "supported_io_types": { 00:33:08.945 "read": true, 00:33:08.945 "write": true, 00:33:08.945 "unmap": true, 00:33:08.945 "write_zeroes": true, 00:33:08.945 "flush": true, 00:33:08.945 "reset": true, 00:33:08.945 "compare": false, 00:33:08.945 "compare_and_write": false, 00:33:08.945 "abort": true, 00:33:08.945 "nvme_admin": false, 00:33:08.945 "nvme_io": false 00:33:08.945 }, 00:33:08.945 "memory_domains": [ 00:33:08.945 { 00:33:08.945 "dma_device_id": "system", 00:33:08.945 "dma_device_type": 1 00:33:08.945 }, 00:33:08.945 { 00:33:08.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:08.945 "dma_device_type": 2 00:33:08.945 } 00:33:08.945 ], 00:33:08.945 "driver_specific": {} 00:33:08.945 } 00:33:08.945 ] 00:33:08.945 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:33:08.945 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:33:08.945 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:33:08.945 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:09.204 BaseBdev3 00:33:09.204 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:33:09.204 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:33:09.204 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:09.204 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:33:09.204 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:09.204 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:09.204 11:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:09.462 11:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:09.783 [ 00:33:09.783 { 00:33:09.783 "name": "BaseBdev3", 00:33:09.783 "aliases": [ 00:33:09.783 "e2834f3c-129f-48b2-834e-eb8ec6951501" 00:33:09.783 ], 00:33:09.783 "product_name": "Malloc disk", 00:33:09.783 "block_size": 512, 00:33:09.783 "num_blocks": 65536, 00:33:09.783 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:09.783 "assigned_rate_limits": { 00:33:09.783 "rw_ios_per_sec": 0, 00:33:09.783 "rw_mbytes_per_sec": 0, 00:33:09.783 "r_mbytes_per_sec": 0, 00:33:09.783 "w_mbytes_per_sec": 0 00:33:09.783 }, 00:33:09.783 "claimed": false, 00:33:09.783 "zoned": false, 00:33:09.783 "supported_io_types": { 00:33:09.783 "read": true, 00:33:09.783 "write": true, 00:33:09.783 "unmap": true, 00:33:09.783 "write_zeroes": true, 00:33:09.783 "flush": true, 00:33:09.783 "reset": true, 00:33:09.783 "compare": false, 00:33:09.783 "compare_and_write": false, 00:33:09.783 "abort": true, 00:33:09.783 "nvme_admin": false, 00:33:09.783 "nvme_io": false 00:33:09.783 }, 00:33:09.783 "memory_domains": [ 00:33:09.783 { 00:33:09.783 "dma_device_id": "system", 00:33:09.783 "dma_device_type": 1 00:33:09.783 }, 00:33:09.783 { 00:33:09.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.783 "dma_device_type": 2 00:33:09.783 } 00:33:09.783 ], 00:33:09.783 "driver_specific": {} 00:33:09.783 } 00:33:09.783 ] 00:33:09.783 11:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:33:09.783 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:33:09.783 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:33:09.783 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:10.041 BaseBdev4 00:33:10.041 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:33:10.041 11:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:33:10.041 11:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:10.042 11:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:33:10.042 11:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:10.042 11:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:10.042 11:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:10.300 11:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:10.300 [ 00:33:10.300 { 00:33:10.300 "name": "BaseBdev4", 00:33:10.300 "aliases": [ 00:33:10.300 "f3c2b343-804f-4e09-a194-a9d1736b1679" 00:33:10.300 ], 00:33:10.300 "product_name": "Malloc disk", 00:33:10.300 "block_size": 512, 00:33:10.300 "num_blocks": 65536, 00:33:10.300 "uuid": 
"f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:10.300 "assigned_rate_limits": { 00:33:10.300 "rw_ios_per_sec": 0, 00:33:10.300 "rw_mbytes_per_sec": 0, 00:33:10.300 "r_mbytes_per_sec": 0, 00:33:10.300 "w_mbytes_per_sec": 0 00:33:10.300 }, 00:33:10.300 "claimed": false, 00:33:10.300 "zoned": false, 00:33:10.300 "supported_io_types": { 00:33:10.300 "read": true, 00:33:10.300 "write": true, 00:33:10.300 "unmap": true, 00:33:10.300 "write_zeroes": true, 00:33:10.300 "flush": true, 00:33:10.301 "reset": true, 00:33:10.301 "compare": false, 00:33:10.301 "compare_and_write": false, 00:33:10.301 "abort": true, 00:33:10.301 "nvme_admin": false, 00:33:10.301 "nvme_io": false 00:33:10.301 }, 00:33:10.301 "memory_domains": [ 00:33:10.301 { 00:33:10.301 "dma_device_id": "system", 00:33:10.301 "dma_device_type": 1 00:33:10.301 }, 00:33:10.301 { 00:33:10.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.301 "dma_device_type": 2 00:33:10.301 } 00:33:10.301 ], 00:33:10.301 "driver_specific": {} 00:33:10.301 } 00:33:10.301 ] 00:33:10.301 11:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:33:10.301 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:33:10.301 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:33:10.301 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:10.558 [2024-05-15 11:26:29.069132] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:10.558 [2024-05-15 11:26:29.069218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:10.559 [2024-05-15 11:26:29.069261] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:10.559 [2024-05-15 11:26:29.070977] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:10.559 [2024-05-15 11:26:29.071027] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:33:10.559 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:10.816 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:10.816 "name": "Existed_Raid", 00:33:10.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.816 "strip_size_kb": 0, 00:33:10.816 "state": "configuring", 00:33:10.816 "raid_level": "raid1", 00:33:10.816 "superblock": false, 00:33:10.816 "num_base_bdevs": 4, 00:33:10.816 "num_base_bdevs_discovered": 3, 00:33:10.816 "num_base_bdevs_operational": 4, 00:33:10.816 "base_bdevs_list": [ 00:33:10.816 { 00:33:10.816 "name": "BaseBdev1", 00:33:10.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.816 "is_configured": false, 00:33:10.816 "data_offset": 0, 00:33:10.816 "data_size": 0 00:33:10.816 }, 00:33:10.816 { 00:33:10.816 "name": "BaseBdev2", 00:33:10.816 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:10.816 "is_configured": true, 00:33:10.816 "data_offset": 0, 00:33:10.816 "data_size": 65536 00:33:10.816 }, 00:33:10.816 { 00:33:10.816 "name": "BaseBdev3", 00:33:10.816 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:10.816 "is_configured": true, 00:33:10.816 "data_offset": 0, 00:33:10.816 "data_size": 65536 00:33:10.816 }, 00:33:10.816 { 00:33:10.816 "name": "BaseBdev4", 00:33:10.816 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:10.816 "is_configured": true, 00:33:10.816 "data_offset": 0, 00:33:10.816 "data_size": 65536 00:33:10.816 } 00:33:10.816 ] 00:33:10.816 }' 00:33:10.816 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:10.816 11:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.379 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:11.636 [2024-05-15 11:26:30.141300] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:11.636 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:11.892 
11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:11.892 "name": "Existed_Raid", 00:33:11.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.892 "strip_size_kb": 0, 00:33:11.892 "state": "configuring", 00:33:11.892 "raid_level": "raid1", 00:33:11.892 "superblock": false, 00:33:11.892 "num_base_bdevs": 4, 00:33:11.892 "num_base_bdevs_discovered": 2, 00:33:11.892 "num_base_bdevs_operational": 4, 00:33:11.892 "base_bdevs_list": [ 00:33:11.892 { 00:33:11.892 "name": "BaseBdev1", 00:33:11.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.892 "is_configured": false, 00:33:11.892 "data_offset": 0, 00:33:11.892 "data_size": 0 00:33:11.892 }, 00:33:11.892 { 00:33:11.892 "name": null, 00:33:11.892 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:11.892 "is_configured": false, 00:33:11.892 "data_offset": 0, 00:33:11.892 "data_size": 65536 00:33:11.892 }, 00:33:11.892 { 00:33:11.892 "name": "BaseBdev3", 00:33:11.892 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:11.892 "is_configured": true, 00:33:11.892 "data_offset": 0, 00:33:11.892 "data_size": 65536 00:33:11.892 }, 00:33:11.892 { 00:33:11.892 "name": "BaseBdev4", 00:33:11.892 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:11.892 "is_configured": true, 00:33:11.892 "data_offset": 0, 00:33:11.892 "data_size": 65536 00:33:11.892 } 00:33:11.892 ] 00:33:11.892 }' 00:33:11.892 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:11.892 11:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.457 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.457 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:12.714 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:33:12.714 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:12.972 BaseBdev1 00:33:12.972 [2024-05-15 11:26:31.472208] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:12.972 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:33:12.972 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:33:12.972 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:12.972 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:33:12.972 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:12.972 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:12.972 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:13.235 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:13.493 [ 00:33:13.493 { 00:33:13.493 "name": "BaseBdev1", 00:33:13.493 "aliases": [ 00:33:13.493 
"71d16eea-6e44-4243-b237-4bd31decc38b" 00:33:13.493 ], 00:33:13.493 "product_name": "Malloc disk", 00:33:13.493 "block_size": 512, 00:33:13.493 "num_blocks": 65536, 00:33:13.493 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:13.493 "assigned_rate_limits": { 00:33:13.493 "rw_ios_per_sec": 0, 00:33:13.493 "rw_mbytes_per_sec": 0, 00:33:13.493 "r_mbytes_per_sec": 0, 00:33:13.493 "w_mbytes_per_sec": 0 00:33:13.493 }, 00:33:13.493 "claimed": true, 00:33:13.493 "claim_type": "exclusive_write", 00:33:13.493 "zoned": false, 00:33:13.493 "supported_io_types": { 00:33:13.493 "read": true, 00:33:13.493 "write": true, 00:33:13.493 "unmap": true, 00:33:13.493 "write_zeroes": true, 00:33:13.493 "flush": true, 00:33:13.493 "reset": true, 00:33:13.493 "compare": false, 00:33:13.493 "compare_and_write": false, 00:33:13.493 "abort": true, 00:33:13.493 "nvme_admin": false, 00:33:13.493 "nvme_io": false 00:33:13.493 }, 00:33:13.493 "memory_domains": [ 00:33:13.493 { 00:33:13.493 "dma_device_id": "system", 00:33:13.493 "dma_device_type": 1 00:33:13.493 }, 00:33:13.493 { 00:33:13.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:13.493 "dma_device_type": 2 00:33:13.493 } 00:33:13.493 ], 00:33:13.493 "driver_specific": {} 00:33:13.493 } 00:33:13.493 ] 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.493 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:13.751 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:13.751 "name": "Existed_Raid", 00:33:13.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.751 "strip_size_kb": 0, 00:33:13.751 "state": "configuring", 00:33:13.751 "raid_level": "raid1", 00:33:13.751 "superblock": false, 00:33:13.751 "num_base_bdevs": 4, 00:33:13.751 "num_base_bdevs_discovered": 3, 00:33:13.751 "num_base_bdevs_operational": 4, 00:33:13.751 "base_bdevs_list": [ 00:33:13.751 { 00:33:13.751 "name": "BaseBdev1", 00:33:13.751 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:13.751 "is_configured": true, 00:33:13.751 "data_offset": 0, 00:33:13.751 "data_size": 65536 
00:33:13.751 }, 00:33:13.751 { 00:33:13.751 "name": null, 00:33:13.751 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:13.751 "is_configured": false, 00:33:13.751 "data_offset": 0, 00:33:13.751 "data_size": 65536 00:33:13.751 }, 00:33:13.751 { 00:33:13.751 "name": "BaseBdev3", 00:33:13.751 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:13.751 "is_configured": true, 00:33:13.751 "data_offset": 0, 00:33:13.751 "data_size": 65536 00:33:13.751 }, 00:33:13.751 { 00:33:13.751 "name": "BaseBdev4", 00:33:13.751 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:13.751 "is_configured": true, 00:33:13.751 "data_offset": 0, 00:33:13.751 "data_size": 65536 00:33:13.751 } 00:33:13.751 ] 00:33:13.751 }' 00:33:13.751 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:13.751 11:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.319 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.319 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:14.578 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:14.578 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:33:14.578 [2024-05-15 11:26:33.168540] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.578 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.837 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:14.837 "name": "Existed_Raid", 00:33:14.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.837 "strip_size_kb": 0, 00:33:14.837 "state": "configuring", 00:33:14.837 "raid_level": "raid1", 00:33:14.837 "superblock": false, 00:33:14.837 "num_base_bdevs": 4, 00:33:14.837 "num_base_bdevs_discovered": 2, 00:33:14.837 
"num_base_bdevs_operational": 4, 00:33:14.837 "base_bdevs_list": [ 00:33:14.837 { 00:33:14.837 "name": "BaseBdev1", 00:33:14.837 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:14.837 "is_configured": true, 00:33:14.837 "data_offset": 0, 00:33:14.837 "data_size": 65536 00:33:14.837 }, 00:33:14.837 { 00:33:14.837 "name": null, 00:33:14.837 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:14.837 "is_configured": false, 00:33:14.837 "data_offset": 0, 00:33:14.837 "data_size": 65536 00:33:14.837 }, 00:33:14.837 { 00:33:14.837 "name": null, 00:33:14.837 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:14.837 "is_configured": false, 00:33:14.837 "data_offset": 0, 00:33:14.837 "data_size": 65536 00:33:14.837 }, 00:33:14.837 { 00:33:14.837 "name": "BaseBdev4", 00:33:14.837 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:14.837 "is_configured": true, 00:33:14.837 "data_offset": 0, 00:33:14.837 "data_size": 65536 00:33:14.837 } 00:33:14.837 ] 00:33:14.837 }' 00:33:14.837 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:14.837 11:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.772 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:15.772 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.772 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:33:15.772 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:16.030 [2024-05-15 11:26:34.516884] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.030 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.289 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:16.289 "name": 
"Existed_Raid", 00:33:16.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.289 "strip_size_kb": 0, 00:33:16.289 "state": "configuring", 00:33:16.289 "raid_level": "raid1", 00:33:16.289 "superblock": false, 00:33:16.289 "num_base_bdevs": 4, 00:33:16.289 "num_base_bdevs_discovered": 3, 00:33:16.289 "num_base_bdevs_operational": 4, 00:33:16.289 "base_bdevs_list": [ 00:33:16.289 { 00:33:16.289 "name": "BaseBdev1", 00:33:16.289 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:16.289 "is_configured": true, 00:33:16.289 "data_offset": 0, 00:33:16.289 "data_size": 65536 00:33:16.289 }, 00:33:16.289 { 00:33:16.289 "name": null, 00:33:16.289 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:16.289 "is_configured": false, 00:33:16.289 "data_offset": 0, 00:33:16.289 "data_size": 65536 00:33:16.289 }, 00:33:16.289 { 00:33:16.289 "name": "BaseBdev3", 00:33:16.289 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:16.289 "is_configured": true, 00:33:16.289 "data_offset": 0, 00:33:16.289 "data_size": 65536 00:33:16.289 }, 00:33:16.289 { 00:33:16.289 "name": "BaseBdev4", 00:33:16.289 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:16.289 "is_configured": true, 00:33:16.289 "data_offset": 0, 00:33:16.289 "data_size": 65536 00:33:16.289 } 00:33:16.289 ] 00:33:16.289 }' 00:33:16.289 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:16.289 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:16.855 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.855 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:17.114 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:33:17.114 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:17.373 [2024-05-15 11:26:35.785076] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:33:17.373 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:17.644 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:17.644 "name": "Existed_Raid", 00:33:17.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.645 "strip_size_kb": 0, 00:33:17.645 "state": "configuring", 00:33:17.645 "raid_level": "raid1", 00:33:17.645 "superblock": false, 00:33:17.645 "num_base_bdevs": 4, 00:33:17.645 "num_base_bdevs_discovered": 2, 00:33:17.645 "num_base_bdevs_operational": 4, 00:33:17.645 "base_bdevs_list": [ 00:33:17.645 { 00:33:17.645 "name": null, 00:33:17.645 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:17.645 "is_configured": false, 00:33:17.645 "data_offset": 0, 00:33:17.645 "data_size": 65536 00:33:17.645 }, 00:33:17.645 { 00:33:17.645 "name": null, 00:33:17.645 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:17.645 "is_configured": false, 00:33:17.645 "data_offset": 0, 00:33:17.645 "data_size": 65536 00:33:17.645 }, 00:33:17.645 { 00:33:17.645 "name": "BaseBdev3", 00:33:17.645 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:17.645 "is_configured": true, 00:33:17.645 "data_offset": 0, 00:33:17.645 "data_size": 65536 00:33:17.645 }, 00:33:17.645 { 00:33:17.645 "name": "BaseBdev4", 00:33:17.645 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:17.645 "is_configured": true, 00:33:17.645 "data_offset": 0, 00:33:17.645 "data_size": 65536 00:33:17.645 } 00:33:17.645 ] 00:33:17.645 }' 00:33:17.645 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:17.645 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:18.219 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.219 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:18.477 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:33:18.477 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:18.735 [2024-05-15 11:26:37.213629] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:33:18.735 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:18.736 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.736 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:18.994 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:18.994 "name": "Existed_Raid", 00:33:18.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.994 "strip_size_kb": 0, 00:33:18.994 "state": "configuring", 00:33:18.994 "raid_level": "raid1", 00:33:18.994 "superblock": false, 00:33:18.994 "num_base_bdevs": 4, 00:33:18.994 "num_base_bdevs_discovered": 3, 00:33:18.994 "num_base_bdevs_operational": 4, 00:33:18.994 "base_bdevs_list": [ 00:33:18.994 { 00:33:18.994 "name": null, 00:33:18.994 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:18.994 "is_configured": false, 00:33:18.994 "data_offset": 0, 00:33:18.994 "data_size": 65536 00:33:18.994 }, 00:33:18.994 { 00:33:18.994 "name": "BaseBdev2", 00:33:18.994 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:18.994 "is_configured": true, 00:33:18.994 "data_offset": 0, 00:33:18.994 "data_size": 65536 00:33:18.994 }, 00:33:18.994 { 00:33:18.994 "name": "BaseBdev3", 00:33:18.994 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:18.994 "is_configured": true, 00:33:18.994 "data_offset": 0, 00:33:18.994 "data_size": 65536 00:33:18.994 }, 00:33:18.994 { 00:33:18.994 "name": "BaseBdev4", 00:33:18.994 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:18.994 "is_configured": true, 00:33:18.994 "data_offset": 0, 00:33:18.994 "data_size": 65536 00:33:18.994 } 00:33:18.994 ] 00:33:18.994 }' 00:33:18.994 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:18.994 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.561 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.561 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:19.819 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:33:19.819 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.819 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:20.077 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 71d16eea-6e44-4243-b237-4bd31decc38b 00:33:20.336 [2024-05-15 11:26:38.798008] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:20.336 [2024-05-15 11:26:38.798059] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:33:20.336 [2024-05-15 11:26:38.798070] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:20.336 [2024-05-15 11:26:38.798186] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:33:20.336 
[2024-05-15 11:26:38.798404] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:33:20.336 [2024-05-15 11:26:38.798420] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:33:20.336 [2024-05-15 11:26:38.798648] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:20.336 NewBaseBdev 00:33:20.336 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:33:20.336 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:33:20.336 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:20.336 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:33:20.336 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:20.336 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:20.336 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:20.594 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:20.853 [ 00:33:20.853 { 00:33:20.853 "name": "NewBaseBdev", 00:33:20.854 "aliases": [ 00:33:20.854 "71d16eea-6e44-4243-b237-4bd31decc38b" 00:33:20.854 ], 00:33:20.854 "product_name": "Malloc disk", 00:33:20.854 "block_size": 512, 00:33:20.854 "num_blocks": 65536, 00:33:20.854 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:20.854 "assigned_rate_limits": { 00:33:20.854 "rw_ios_per_sec": 0, 00:33:20.854 "rw_mbytes_per_sec": 0, 00:33:20.854 "r_mbytes_per_sec": 0, 00:33:20.854 "w_mbytes_per_sec": 0 00:33:20.854 }, 00:33:20.854 "claimed": true, 00:33:20.854 "claim_type": "exclusive_write", 00:33:20.854 "zoned": false, 00:33:20.854 "supported_io_types": { 00:33:20.854 "read": true, 00:33:20.854 "write": true, 00:33:20.854 "unmap": true, 00:33:20.854 "write_zeroes": true, 00:33:20.854 "flush": true, 00:33:20.854 "reset": true, 00:33:20.854 "compare": false, 00:33:20.854 "compare_and_write": false, 00:33:20.854 "abort": true, 00:33:20.854 "nvme_admin": false, 00:33:20.854 "nvme_io": false 00:33:20.854 }, 00:33:20.854 "memory_domains": [ 00:33:20.854 { 00:33:20.854 "dma_device_id": "system", 00:33:20.854 "dma_device_type": 1 00:33:20.854 }, 00:33:20.854 { 00:33:20.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:20.854 "dma_device_type": 2 00:33:20.854 } 00:33:20.854 ], 00:33:20.854 "driver_specific": {} 00:33:20.854 } 00:33:20.854 ] 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:20.854 
11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.854 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:21.113 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:21.113 "name": "Existed_Raid", 00:33:21.113 "uuid": "299d0032-25c6-43bd-ac45-1d7104998551", 00:33:21.113 "strip_size_kb": 0, 00:33:21.113 "state": "online", 00:33:21.113 "raid_level": "raid1", 00:33:21.113 "superblock": false, 00:33:21.113 "num_base_bdevs": 4, 00:33:21.113 "num_base_bdevs_discovered": 4, 00:33:21.113 "num_base_bdevs_operational": 4, 00:33:21.113 "base_bdevs_list": [ 00:33:21.113 { 00:33:21.113 "name": "NewBaseBdev", 00:33:21.113 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:21.113 "is_configured": true, 00:33:21.113 "data_offset": 0, 00:33:21.113 "data_size": 65536 00:33:21.113 }, 00:33:21.113 { 00:33:21.113 "name": "BaseBdev2", 00:33:21.113 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:21.113 "is_configured": true, 00:33:21.113 "data_offset": 0, 00:33:21.113 "data_size": 65536 00:33:21.113 }, 00:33:21.113 { 00:33:21.113 "name": "BaseBdev3", 00:33:21.113 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:21.113 "is_configured": true, 00:33:21.113 "data_offset": 0, 00:33:21.113 "data_size": 65536 00:33:21.113 }, 00:33:21.113 { 00:33:21.113 "name": "BaseBdev4", 00:33:21.113 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:21.113 "is_configured": true, 00:33:21.113 "data_offset": 0, 00:33:21.113 "data_size": 65536 00:33:21.113 } 00:33:21.113 ] 00:33:21.113 }' 00:33:21.113 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:21.113 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:21.680 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:33:21.680 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:33:21.680 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:33:21.680 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:33:21.680 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:33:21.680 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:33:21.938 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:21.938 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:33:21.938 [2024-05-15 11:26:40.550594] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:33:21.938 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:33:21.938 "name": "Existed_Raid", 00:33:21.938 "aliases": [ 00:33:21.938 "299d0032-25c6-43bd-ac45-1d7104998551" 00:33:21.938 ], 00:33:21.938 "product_name": "Raid Volume", 00:33:21.938 "block_size": 512, 00:33:21.938 "num_blocks": 65536, 00:33:21.938 "uuid": "299d0032-25c6-43bd-ac45-1d7104998551", 00:33:21.938 "assigned_rate_limits": { 00:33:21.938 "rw_ios_per_sec": 0, 00:33:21.938 "rw_mbytes_per_sec": 0, 00:33:21.938 "r_mbytes_per_sec": 0, 00:33:21.938 "w_mbytes_per_sec": 0 00:33:21.938 }, 00:33:21.938 "claimed": false, 00:33:21.938 "zoned": false, 00:33:21.938 "supported_io_types": { 00:33:21.938 "read": true, 00:33:21.938 "write": true, 00:33:21.938 "unmap": false, 00:33:21.938 "write_zeroes": true, 00:33:21.938 "flush": false, 00:33:21.938 "reset": true, 00:33:21.938 "compare": false, 00:33:21.938 "compare_and_write": false, 00:33:21.938 "abort": false, 00:33:21.939 "nvme_admin": false, 00:33:21.939 "nvme_io": false 00:33:21.939 }, 00:33:21.939 "memory_domains": [ 00:33:21.939 { 00:33:21.939 "dma_device_id": "system", 00:33:21.939 "dma_device_type": 1 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.939 "dma_device_type": 2 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "dma_device_id": "system", 00:33:21.939 "dma_device_type": 1 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.939 "dma_device_type": 2 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "dma_device_id": "system", 00:33:21.939 "dma_device_type": 1 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.939 "dma_device_type": 2 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "dma_device_id": "system", 00:33:21.939 "dma_device_type": 1 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.939 "dma_device_type": 2 00:33:21.939 } 00:33:21.939 ], 00:33:21.939 "driver_specific": { 00:33:21.939 "raid": { 00:33:21.939 "uuid": "299d0032-25c6-43bd-ac45-1d7104998551", 00:33:21.939 "strip_size_kb": 0, 00:33:21.939 "state": "online", 00:33:21.939 "raid_level": "raid1", 00:33:21.939 "superblock": false, 00:33:21.939 "num_base_bdevs": 4, 00:33:21.939 "num_base_bdevs_discovered": 4, 00:33:21.939 "num_base_bdevs_operational": 4, 00:33:21.939 "base_bdevs_list": [ 00:33:21.939 { 00:33:21.939 "name": "NewBaseBdev", 00:33:21.939 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:21.939 "is_configured": true, 00:33:21.939 "data_offset": 0, 00:33:21.939 "data_size": 65536 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "name": "BaseBdev2", 00:33:21.939 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:21.939 "is_configured": true, 00:33:21.939 "data_offset": 0, 00:33:21.939 "data_size": 65536 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "name": "BaseBdev3", 00:33:21.939 "uuid": "e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:21.939 "is_configured": true, 00:33:21.939 "data_offset": 0, 00:33:21.939 "data_size": 65536 00:33:21.939 }, 00:33:21.939 { 00:33:21.939 "name": "BaseBdev4", 00:33:21.939 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:21.939 "is_configured": true, 00:33:21.939 "data_offset": 0, 00:33:21.939 "data_size": 65536 00:33:21.939 } 00:33:21.939 ] 00:33:21.939 } 00:33:21.939 } 00:33:21.939 }' 00:33:21.939 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:33:22.197 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:33:22.197 BaseBdev2 00:33:22.197 BaseBdev3 00:33:22.197 BaseBdev4' 00:33:22.197 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:22.197 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:33:22.197 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:22.456 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:22.456 "name": "NewBaseBdev", 00:33:22.456 "aliases": [ 00:33:22.456 "71d16eea-6e44-4243-b237-4bd31decc38b" 00:33:22.456 ], 00:33:22.456 "product_name": "Malloc disk", 00:33:22.456 "block_size": 512, 00:33:22.456 "num_blocks": 65536, 00:33:22.456 "uuid": "71d16eea-6e44-4243-b237-4bd31decc38b", 00:33:22.456 "assigned_rate_limits": { 00:33:22.456 "rw_ios_per_sec": 0, 00:33:22.456 "rw_mbytes_per_sec": 0, 00:33:22.456 "r_mbytes_per_sec": 0, 00:33:22.456 "w_mbytes_per_sec": 0 00:33:22.456 }, 00:33:22.456 "claimed": true, 00:33:22.456 "claim_type": "exclusive_write", 00:33:22.456 "zoned": false, 00:33:22.456 "supported_io_types": { 00:33:22.456 "read": true, 00:33:22.456 "write": true, 00:33:22.456 "unmap": true, 00:33:22.456 "write_zeroes": true, 00:33:22.456 "flush": true, 00:33:22.456 "reset": true, 00:33:22.456 "compare": false, 00:33:22.456 "compare_and_write": false, 00:33:22.456 "abort": true, 00:33:22.456 "nvme_admin": false, 00:33:22.456 "nvme_io": false 00:33:22.456 }, 00:33:22.456 "memory_domains": [ 00:33:22.456 { 00:33:22.456 "dma_device_id": "system", 00:33:22.456 "dma_device_type": 1 00:33:22.456 }, 00:33:22.456 { 00:33:22.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:22.456 "dma_device_type": 2 00:33:22.456 } 00:33:22.456 ], 00:33:22.456 "driver_specific": {} 00:33:22.456 }' 00:33:22.456 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:22.456 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:22.456 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:22.456 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:22.456 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:22.714 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:22.714 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:22.714 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:22.714 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:22.715 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:22.715 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:22.973 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:22.973 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:22.973 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 00:33:22.973 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:23.231 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:23.231 "name": "BaseBdev2", 00:33:23.231 "aliases": [ 00:33:23.231 "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d" 00:33:23.231 ], 00:33:23.231 "product_name": "Malloc disk", 00:33:23.231 "block_size": 512, 00:33:23.231 "num_blocks": 65536, 00:33:23.231 "uuid": "eaa1d46b-2d24-43d2-858f-3ce6ec6d817d", 00:33:23.231 "assigned_rate_limits": { 00:33:23.231 "rw_ios_per_sec": 0, 00:33:23.231 "rw_mbytes_per_sec": 0, 00:33:23.231 "r_mbytes_per_sec": 0, 00:33:23.231 "w_mbytes_per_sec": 0 00:33:23.231 }, 00:33:23.231 "claimed": true, 00:33:23.231 "claim_type": "exclusive_write", 00:33:23.231 "zoned": false, 00:33:23.231 "supported_io_types": { 00:33:23.231 "read": true, 00:33:23.231 "write": true, 00:33:23.231 "unmap": true, 00:33:23.231 "write_zeroes": true, 00:33:23.231 "flush": true, 00:33:23.231 "reset": true, 00:33:23.231 "compare": false, 00:33:23.231 "compare_and_write": false, 00:33:23.231 "abort": true, 00:33:23.231 "nvme_admin": false, 00:33:23.231 "nvme_io": false 00:33:23.231 }, 00:33:23.231 "memory_domains": [ 00:33:23.231 { 00:33:23.231 "dma_device_id": "system", 00:33:23.231 "dma_device_type": 1 00:33:23.231 }, 00:33:23.231 { 00:33:23.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:23.231 "dma_device_type": 2 00:33:23.231 } 00:33:23.231 ], 00:33:23.231 "driver_specific": {} 00:33:23.231 }' 00:33:23.231 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:23.231 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:23.231 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:23.231 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:23.231 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:23.231 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:23.231 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:23.490 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:23.490 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:23.490 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:23.490 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:23.490 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:23.490 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:23.490 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:23.490 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:23.756 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:23.756 "name": "BaseBdev3", 00:33:23.756 "aliases": [ 00:33:23.756 "e2834f3c-129f-48b2-834e-eb8ec6951501" 00:33:23.756 ], 00:33:23.756 "product_name": "Malloc disk", 00:33:23.756 "block_size": 512, 00:33:23.756 "num_blocks": 65536, 00:33:23.756 "uuid": 
"e2834f3c-129f-48b2-834e-eb8ec6951501", 00:33:23.756 "assigned_rate_limits": { 00:33:23.756 "rw_ios_per_sec": 0, 00:33:23.756 "rw_mbytes_per_sec": 0, 00:33:23.756 "r_mbytes_per_sec": 0, 00:33:23.756 "w_mbytes_per_sec": 0 00:33:23.756 }, 00:33:23.756 "claimed": true, 00:33:23.756 "claim_type": "exclusive_write", 00:33:23.756 "zoned": false, 00:33:23.756 "supported_io_types": { 00:33:23.756 "read": true, 00:33:23.756 "write": true, 00:33:23.756 "unmap": true, 00:33:23.756 "write_zeroes": true, 00:33:23.756 "flush": true, 00:33:23.756 "reset": true, 00:33:23.756 "compare": false, 00:33:23.756 "compare_and_write": false, 00:33:23.756 "abort": true, 00:33:23.756 "nvme_admin": false, 00:33:23.756 "nvme_io": false 00:33:23.756 }, 00:33:23.756 "memory_domains": [ 00:33:23.756 { 00:33:23.757 "dma_device_id": "system", 00:33:23.757 "dma_device_type": 1 00:33:23.757 }, 00:33:23.757 { 00:33:23.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:23.757 "dma_device_type": 2 00:33:23.757 } 00:33:23.757 ], 00:33:23.757 "driver_specific": {} 00:33:23.757 }' 00:33:23.757 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:24.015 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:24.015 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:24.015 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:24.015 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:24.015 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:24.015 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:24.015 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:24.273 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:24.273 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:24.273 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:24.273 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:24.273 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:24.273 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:24.273 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:24.532 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:24.532 "name": "BaseBdev4", 00:33:24.532 "aliases": [ 00:33:24.532 "f3c2b343-804f-4e09-a194-a9d1736b1679" 00:33:24.532 ], 00:33:24.532 "product_name": "Malloc disk", 00:33:24.532 "block_size": 512, 00:33:24.532 "num_blocks": 65536, 00:33:24.532 "uuid": "f3c2b343-804f-4e09-a194-a9d1736b1679", 00:33:24.532 "assigned_rate_limits": { 00:33:24.532 "rw_ios_per_sec": 0, 00:33:24.532 "rw_mbytes_per_sec": 0, 00:33:24.532 "r_mbytes_per_sec": 0, 00:33:24.532 "w_mbytes_per_sec": 0 00:33:24.532 }, 00:33:24.532 "claimed": true, 00:33:24.532 "claim_type": "exclusive_write", 00:33:24.532 "zoned": false, 00:33:24.532 "supported_io_types": { 00:33:24.532 "read": true, 00:33:24.532 "write": true, 00:33:24.532 "unmap": true, 00:33:24.532 "write_zeroes": true, 
00:33:24.532 "flush": true, 00:33:24.532 "reset": true, 00:33:24.532 "compare": false, 00:33:24.532 "compare_and_write": false, 00:33:24.532 "abort": true, 00:33:24.532 "nvme_admin": false, 00:33:24.532 "nvme_io": false 00:33:24.532 }, 00:33:24.532 "memory_domains": [ 00:33:24.532 { 00:33:24.532 "dma_device_id": "system", 00:33:24.532 "dma_device_type": 1 00:33:24.532 }, 00:33:24.532 { 00:33:24.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:24.532 "dma_device_type": 2 00:33:24.532 } 00:33:24.532 ], 00:33:24.532 "driver_specific": {} 00:33:24.532 }' 00:33:24.532 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:24.532 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:24.532 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:24.532 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:24.789 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:24.789 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:24.789 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:24.789 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:24.789 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:24.789 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:24.789 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:25.046 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:25.046 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:25.305 [2024-05-15 11:26:43.711115] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:25.305 [2024-05-15 11:26:43.711157] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:25.305 [2024-05-15 11:26:43.711226] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:25.305 [2024-05-15 11:26:43.711418] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:25.305 [2024-05-15 11:26:43.711444] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 70049 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 70049 ']' 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 70049 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70049 00:33:25.305 killing process with pid 70049 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70049' 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 70049 00:33:25.305 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 70049 00:33:25.305 [2024-05-15 11:26:43.748444] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:25.563 [2024-05-15 11:26:44.059102] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:33:26.941 00:33:26.941 real 0m35.441s 00:33:26.941 user 1m6.903s 00:33:26.941 sys 0m3.593s 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:26.941 ************************************ 00:33:26.941 END TEST raid_state_function_test 00:33:26.941 ************************************ 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.941 11:26:45 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:33:26.941 11:26:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:33:26.941 11:26:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:26.941 11:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:26.941 ************************************ 00:33:26.941 START TEST raid_state_function_test_sb 00:33:26.941 ************************************ 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:33:26.941 11:26:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:33:26.941 Process raid pid: 71178 00:33:26.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=71178 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 71178' 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 71178 /var/tmp/spdk-raid.sock 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 71178 ']' 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:26.941 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:26.941 [2024-05-15 11:26:45.416098] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:33:26.941 [2024-05-15 11:26:45.416293] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.200 [2024-05-15 11:26:45.583430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.200 [2024-05-15 11:26:45.808743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.459 [2024-05-15 11:26:46.006894] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:27.718 11:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:27.718 11:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:33:27.718 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:27.977 [2024-05-15 11:26:46.496425] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:27.977 [2024-05-15 11:26:46.496531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:27.977 [2024-05-15 11:26:46.496563] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:27.977 [2024-05-15 11:26:46.496587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:27.977 [2024-05-15 11:26:46.496598] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:27.977 [2024-05-15 11:26:46.496653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:27.977 [2024-05-15 11:26:46.496667] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:27.977 [2024-05-15 11:26:46.496696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.977 11:26:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:28.326 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:28.326 "name": "Existed_Raid", 00:33:28.326 "uuid": "f09fba03-1c48-47f4-9622-e4a89fa9228c", 00:33:28.326 "strip_size_kb": 0, 00:33:28.326 "state": "configuring", 00:33:28.326 "raid_level": "raid1", 00:33:28.326 "superblock": true, 00:33:28.326 "num_base_bdevs": 4, 00:33:28.326 "num_base_bdevs_discovered": 0, 00:33:28.326 "num_base_bdevs_operational": 4, 00:33:28.326 "base_bdevs_list": [ 00:33:28.326 { 00:33:28.326 "name": "BaseBdev1", 00:33:28.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.326 "is_configured": false, 00:33:28.326 "data_offset": 0, 00:33:28.326 "data_size": 0 00:33:28.326 }, 00:33:28.326 { 00:33:28.326 "name": "BaseBdev2", 00:33:28.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.326 "is_configured": false, 00:33:28.326 "data_offset": 0, 00:33:28.326 "data_size": 0 00:33:28.326 }, 00:33:28.326 { 00:33:28.326 "name": "BaseBdev3", 00:33:28.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.326 "is_configured": false, 00:33:28.326 "data_offset": 0, 00:33:28.326 "data_size": 0 00:33:28.326 }, 00:33:28.326 { 00:33:28.326 "name": "BaseBdev4", 00:33:28.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.326 "is_configured": false, 00:33:28.326 "data_offset": 0, 00:33:28.326 "data_size": 0 00:33:28.326 } 00:33:28.326 ] 00:33:28.326 }' 00:33:28.326 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:28.326 11:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:28.894 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:29.153 [2024-05-15 11:26:47.728414] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:29.153 [2024-05-15 11:26:47.728460] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:33:29.153 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:29.411 [2024-05-15 11:26:47.980483] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:29.411 [2024-05-15 11:26:47.980561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:29.411 [2024-05-15 11:26:47.980593] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:29.411 [2024-05-15 11:26:47.980620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:29.411 [2024-05-15 11:26:47.980630] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:29.411 [2024-05-15 11:26:47.980647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:29.411 [2024-05-15 11:26:47.980654] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:29.411 [2024-05-15 11:26:47.980679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:29.411 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:29.670 [2024-05-15 11:26:48.261056] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:29.670 BaseBdev1 00:33:29.670 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:33:29.670 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:33:29.670 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:29.670 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:29.670 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:29.670 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:29.670 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:29.928 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:30.189 [ 00:33:30.189 { 00:33:30.189 "name": "BaseBdev1", 00:33:30.189 "aliases": [ 00:33:30.189 "c3fbf0ae-344b-4ab0-84e8-a658b73833a2" 00:33:30.189 ], 00:33:30.189 "product_name": "Malloc disk", 00:33:30.189 "block_size": 512, 00:33:30.189 "num_blocks": 65536, 00:33:30.189 "uuid": "c3fbf0ae-344b-4ab0-84e8-a658b73833a2", 00:33:30.189 "assigned_rate_limits": { 00:33:30.189 "rw_ios_per_sec": 0, 00:33:30.189 "rw_mbytes_per_sec": 0, 00:33:30.189 "r_mbytes_per_sec": 0, 00:33:30.189 "w_mbytes_per_sec": 0 00:33:30.189 }, 00:33:30.189 "claimed": true, 00:33:30.189 "claim_type": "exclusive_write", 00:33:30.189 "zoned": false, 00:33:30.189 "supported_io_types": { 00:33:30.189 "read": true, 00:33:30.189 "write": true, 00:33:30.189 "unmap": true, 00:33:30.190 "write_zeroes": true, 00:33:30.190 "flush": true, 00:33:30.190 "reset": true, 00:33:30.190 "compare": false, 00:33:30.190 "compare_and_write": false, 00:33:30.190 "abort": true, 00:33:30.190 "nvme_admin": false, 00:33:30.190 "nvme_io": false 00:33:30.190 }, 00:33:30.190 "memory_domains": [ 00:33:30.190 { 00:33:30.190 "dma_device_id": "system", 00:33:30.190 "dma_device_type": 1 00:33:30.190 }, 00:33:30.190 { 00:33:30.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:30.190 "dma_device_type": 2 00:33:30.190 } 00:33:30.190 ], 00:33:30.190 "driver_specific": {} 00:33:30.190 } 00:33:30.190 ] 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.190 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:30.449 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:30.449 "name": "Existed_Raid", 00:33:30.449 "uuid": "9b37d7bd-8a89-4b1f-9fee-bf43be1d9ba2", 00:33:30.449 "strip_size_kb": 0, 00:33:30.449 "state": "configuring", 00:33:30.449 "raid_level": "raid1", 00:33:30.449 "superblock": true, 00:33:30.449 "num_base_bdevs": 4, 00:33:30.449 "num_base_bdevs_discovered": 1, 00:33:30.449 "num_base_bdevs_operational": 4, 00:33:30.449 "base_bdevs_list": [ 00:33:30.449 { 00:33:30.449 "name": "BaseBdev1", 00:33:30.449 "uuid": "c3fbf0ae-344b-4ab0-84e8-a658b73833a2", 00:33:30.449 "is_configured": true, 00:33:30.449 "data_offset": 2048, 00:33:30.449 "data_size": 63488 00:33:30.449 }, 00:33:30.449 { 00:33:30.449 "name": "BaseBdev2", 00:33:30.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.449 "is_configured": false, 00:33:30.449 "data_offset": 0, 00:33:30.449 "data_size": 0 00:33:30.449 }, 00:33:30.449 { 00:33:30.449 "name": "BaseBdev3", 00:33:30.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.449 "is_configured": false, 00:33:30.449 "data_offset": 0, 00:33:30.449 "data_size": 0 00:33:30.449 }, 00:33:30.449 { 00:33:30.449 "name": "BaseBdev4", 00:33:30.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.449 "is_configured": false, 00:33:30.449 "data_offset": 0, 00:33:30.449 "data_size": 0 00:33:30.449 } 00:33:30.449 ] 00:33:30.449 }' 00:33:30.449 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:30.449 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.386 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:31.386 [2024-05-15 11:26:49.981339] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:31.387 [2024-05-15 11:26:49.981400] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:33:31.387 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:31.646 [2024-05-15 11:26:50.229487] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:31.647 [2024-05-15 11:26:50.231196] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:31.647 [2024-05-15 11:26:50.231266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:31.647 [2024-05-15 11:26:50.231290] 
bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:31.647 [2024-05-15 11:26:50.231317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:31.647 [2024-05-15 11:26:50.231327] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:31.647 [2024-05-15 11:26:50.231343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.647 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:31.906 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:31.906 "name": "Existed_Raid", 00:33:31.906 "uuid": "1483408a-9ec5-4f2e-9105-b87da4b00ecb", 00:33:31.906 "strip_size_kb": 0, 00:33:31.906 "state": "configuring", 00:33:31.906 "raid_level": "raid1", 00:33:31.906 "superblock": true, 00:33:31.906 "num_base_bdevs": 4, 00:33:31.906 "num_base_bdevs_discovered": 1, 00:33:31.906 "num_base_bdevs_operational": 4, 00:33:31.906 "base_bdevs_list": [ 00:33:31.906 { 00:33:31.906 "name": "BaseBdev1", 00:33:31.906 "uuid": "c3fbf0ae-344b-4ab0-84e8-a658b73833a2", 00:33:31.906 "is_configured": true, 00:33:31.906 "data_offset": 2048, 00:33:31.906 "data_size": 63488 00:33:31.906 }, 00:33:31.906 { 00:33:31.906 "name": "BaseBdev2", 00:33:31.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.906 "is_configured": false, 00:33:31.906 "data_offset": 0, 00:33:31.906 "data_size": 0 00:33:31.906 }, 00:33:31.906 { 00:33:31.906 "name": "BaseBdev3", 00:33:31.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.906 "is_configured": false, 00:33:31.906 "data_offset": 0, 00:33:31.906 "data_size": 0 00:33:31.906 }, 00:33:31.906 { 00:33:31.906 "name": "BaseBdev4", 00:33:31.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.906 "is_configured": false, 00:33:31.906 
"data_offset": 0, 00:33:31.906 "data_size": 0 00:33:31.906 } 00:33:31.906 ] 00:33:31.906 }' 00:33:31.906 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:31.906 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:32.886 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:32.886 [2024-05-15 11:26:51.474774] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:32.886 BaseBdev2 00:33:32.886 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:33:32.886 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:33:32.886 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:32.886 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:32.886 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:32.886 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:32.886 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:33.150 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:33.408 [ 00:33:33.408 { 00:33:33.408 "name": "BaseBdev2", 00:33:33.408 "aliases": [ 00:33:33.408 "60ed970f-9e67-4bc1-8615-648309853aef" 00:33:33.408 ], 00:33:33.408 "product_name": "Malloc disk", 00:33:33.408 "block_size": 512, 00:33:33.408 "num_blocks": 65536, 00:33:33.408 "uuid": "60ed970f-9e67-4bc1-8615-648309853aef", 00:33:33.408 "assigned_rate_limits": { 00:33:33.408 "rw_ios_per_sec": 0, 00:33:33.408 "rw_mbytes_per_sec": 0, 00:33:33.408 "r_mbytes_per_sec": 0, 00:33:33.408 "w_mbytes_per_sec": 0 00:33:33.408 }, 00:33:33.408 "claimed": true, 00:33:33.408 "claim_type": "exclusive_write", 00:33:33.408 "zoned": false, 00:33:33.408 "supported_io_types": { 00:33:33.408 "read": true, 00:33:33.408 "write": true, 00:33:33.408 "unmap": true, 00:33:33.408 "write_zeroes": true, 00:33:33.408 "flush": true, 00:33:33.408 "reset": true, 00:33:33.408 "compare": false, 00:33:33.408 "compare_and_write": false, 00:33:33.408 "abort": true, 00:33:33.408 "nvme_admin": false, 00:33:33.408 "nvme_io": false 00:33:33.408 }, 00:33:33.408 "memory_domains": [ 00:33:33.408 { 00:33:33.408 "dma_device_id": "system", 00:33:33.408 "dma_device_type": 1 00:33:33.408 }, 00:33:33.408 { 00:33:33.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:33.408 "dma_device_type": 2 00:33:33.408 } 00:33:33.408 ], 00:33:33.408 "driver_specific": {} 00:33:33.408 } 00:33:33.408 ] 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.408 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:33.666 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:33.666 "name": "Existed_Raid", 00:33:33.666 "uuid": "1483408a-9ec5-4f2e-9105-b87da4b00ecb", 00:33:33.666 "strip_size_kb": 0, 00:33:33.666 "state": "configuring", 00:33:33.666 "raid_level": "raid1", 00:33:33.666 "superblock": true, 00:33:33.666 "num_base_bdevs": 4, 00:33:33.666 "num_base_bdevs_discovered": 2, 00:33:33.666 "num_base_bdevs_operational": 4, 00:33:33.666 "base_bdevs_list": [ 00:33:33.666 { 00:33:33.666 "name": "BaseBdev1", 00:33:33.666 "uuid": "c3fbf0ae-344b-4ab0-84e8-a658b73833a2", 00:33:33.666 "is_configured": true, 00:33:33.666 "data_offset": 2048, 00:33:33.666 "data_size": 63488 00:33:33.666 }, 00:33:33.666 { 00:33:33.666 "name": "BaseBdev2", 00:33:33.666 "uuid": "60ed970f-9e67-4bc1-8615-648309853aef", 00:33:33.666 "is_configured": true, 00:33:33.666 "data_offset": 2048, 00:33:33.666 "data_size": 63488 00:33:33.666 }, 00:33:33.666 { 00:33:33.666 "name": "BaseBdev3", 00:33:33.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:33.666 "is_configured": false, 00:33:33.666 "data_offset": 0, 00:33:33.666 "data_size": 0 00:33:33.666 }, 00:33:33.666 { 00:33:33.666 "name": "BaseBdev4", 00:33:33.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:33.666 "is_configured": false, 00:33:33.666 "data_offset": 0, 00:33:33.666 "data_size": 0 00:33:33.666 } 00:33:33.666 ] 00:33:33.666 }' 00:33:33.666 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:33.666 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:34.231 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:34.489 [2024-05-15 11:26:53.048445] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:34.489 BaseBdev3 00:33:34.489 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:33:34.489 11:26:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:33:34.489 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:34.489 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:34.489 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:34.489 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:34.489 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:34.747 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:35.005 [ 00:33:35.005 { 00:33:35.005 "name": "BaseBdev3", 00:33:35.005 "aliases": [ 00:33:35.005 "3cca05ac-ca19-44bb-a4bc-1afcbdb0984a" 00:33:35.005 ], 00:33:35.005 "product_name": "Malloc disk", 00:33:35.005 "block_size": 512, 00:33:35.005 "num_blocks": 65536, 00:33:35.005 "uuid": "3cca05ac-ca19-44bb-a4bc-1afcbdb0984a", 00:33:35.005 "assigned_rate_limits": { 00:33:35.005 "rw_ios_per_sec": 0, 00:33:35.005 "rw_mbytes_per_sec": 0, 00:33:35.005 "r_mbytes_per_sec": 0, 00:33:35.005 "w_mbytes_per_sec": 0 00:33:35.005 }, 00:33:35.005 "claimed": true, 00:33:35.005 "claim_type": "exclusive_write", 00:33:35.005 "zoned": false, 00:33:35.005 "supported_io_types": { 00:33:35.005 "read": true, 00:33:35.005 "write": true, 00:33:35.005 "unmap": true, 00:33:35.005 "write_zeroes": true, 00:33:35.005 "flush": true, 00:33:35.005 "reset": true, 00:33:35.005 "compare": false, 00:33:35.005 "compare_and_write": false, 00:33:35.005 "abort": true, 00:33:35.005 "nvme_admin": false, 00:33:35.005 "nvme_io": false 00:33:35.005 }, 00:33:35.005 "memory_domains": [ 00:33:35.005 { 00:33:35.005 "dma_device_id": "system", 00:33:35.005 "dma_device_type": 1 00:33:35.005 }, 00:33:35.005 { 00:33:35.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:35.005 "dma_device_type": 2 00:33:35.005 } 00:33:35.005 ], 00:33:35.005 "driver_specific": {} 00:33:35.005 } 00:33:35.005 ] 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:35.005 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:35.264 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:35.264 "name": "Existed_Raid", 00:33:35.264 "uuid": "1483408a-9ec5-4f2e-9105-b87da4b00ecb", 00:33:35.264 "strip_size_kb": 0, 00:33:35.264 "state": "configuring", 00:33:35.264 "raid_level": "raid1", 00:33:35.264 "superblock": true, 00:33:35.264 "num_base_bdevs": 4, 00:33:35.264 "num_base_bdevs_discovered": 3, 00:33:35.264 "num_base_bdevs_operational": 4, 00:33:35.264 "base_bdevs_list": [ 00:33:35.264 { 00:33:35.264 "name": "BaseBdev1", 00:33:35.264 "uuid": "c3fbf0ae-344b-4ab0-84e8-a658b73833a2", 00:33:35.264 "is_configured": true, 00:33:35.264 "data_offset": 2048, 00:33:35.264 "data_size": 63488 00:33:35.264 }, 00:33:35.264 { 00:33:35.264 "name": "BaseBdev2", 00:33:35.264 "uuid": "60ed970f-9e67-4bc1-8615-648309853aef", 00:33:35.264 "is_configured": true, 00:33:35.264 "data_offset": 2048, 00:33:35.264 "data_size": 63488 00:33:35.264 }, 00:33:35.264 { 00:33:35.264 "name": "BaseBdev3", 00:33:35.264 "uuid": "3cca05ac-ca19-44bb-a4bc-1afcbdb0984a", 00:33:35.264 "is_configured": true, 00:33:35.264 "data_offset": 2048, 00:33:35.264 "data_size": 63488 00:33:35.264 }, 00:33:35.264 { 00:33:35.264 "name": "BaseBdev4", 00:33:35.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.264 "is_configured": false, 00:33:35.264 "data_offset": 0, 00:33:35.264 "data_size": 0 00:33:35.264 } 00:33:35.264 ] 00:33:35.264 }' 00:33:35.264 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:35.264 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:35.832 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:36.091 [2024-05-15 11:26:54.520811] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:36.091 BaseBdev4 00:33:36.091 [2024-05-15 11:26:54.521337] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:33:36.091 [2024-05-15 11:26:54.521369] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:36.091 [2024-05-15 11:26:54.521495] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:33:36.091 [2024-05-15 11:26:54.521765] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:33:36.091 [2024-05-15 11:26:54.521779] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:33:36.091 [2024-05-15 11:26:54.521893] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:36.091 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:33:36.091 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 
00:33:36.091 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:36.091 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:36.091 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:36.091 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:36.091 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:36.350 [ 00:33:36.350 { 00:33:36.350 "name": "BaseBdev4", 00:33:36.350 "aliases": [ 00:33:36.350 "b473f577-4263-4330-9385-343cc614f6b0" 00:33:36.350 ], 00:33:36.350 "product_name": "Malloc disk", 00:33:36.350 "block_size": 512, 00:33:36.350 "num_blocks": 65536, 00:33:36.350 "uuid": "b473f577-4263-4330-9385-343cc614f6b0", 00:33:36.350 "assigned_rate_limits": { 00:33:36.350 "rw_ios_per_sec": 0, 00:33:36.350 "rw_mbytes_per_sec": 0, 00:33:36.350 "r_mbytes_per_sec": 0, 00:33:36.350 "w_mbytes_per_sec": 0 00:33:36.350 }, 00:33:36.350 "claimed": true, 00:33:36.350 "claim_type": "exclusive_write", 00:33:36.350 "zoned": false, 00:33:36.350 "supported_io_types": { 00:33:36.350 "read": true, 00:33:36.350 "write": true, 00:33:36.350 "unmap": true, 00:33:36.350 "write_zeroes": true, 00:33:36.350 "flush": true, 00:33:36.350 "reset": true, 00:33:36.350 "compare": false, 00:33:36.350 "compare_and_write": false, 00:33:36.350 "abort": true, 00:33:36.350 "nvme_admin": false, 00:33:36.350 "nvme_io": false 00:33:36.350 }, 00:33:36.350 "memory_domains": [ 00:33:36.350 { 00:33:36.350 "dma_device_id": "system", 00:33:36.350 "dma_device_type": 1 00:33:36.350 }, 00:33:36.350 { 00:33:36.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:36.350 "dma_device_type": 2 00:33:36.350 } 00:33:36.350 ], 00:33:36.350 "driver_specific": {} 00:33:36.350 } 00:33:36.350 ] 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:36.350 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.608 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:36.608 "name": "Existed_Raid", 00:33:36.608 "uuid": "1483408a-9ec5-4f2e-9105-b87da4b00ecb", 00:33:36.608 "strip_size_kb": 0, 00:33:36.608 "state": "online", 00:33:36.608 "raid_level": "raid1", 00:33:36.608 "superblock": true, 00:33:36.608 "num_base_bdevs": 4, 00:33:36.608 "num_base_bdevs_discovered": 4, 00:33:36.608 "num_base_bdevs_operational": 4, 00:33:36.608 "base_bdevs_list": [ 00:33:36.608 { 00:33:36.608 "name": "BaseBdev1", 00:33:36.608 "uuid": "c3fbf0ae-344b-4ab0-84e8-a658b73833a2", 00:33:36.608 "is_configured": true, 00:33:36.608 "data_offset": 2048, 00:33:36.608 "data_size": 63488 00:33:36.608 }, 00:33:36.608 { 00:33:36.608 "name": "BaseBdev2", 00:33:36.608 "uuid": "60ed970f-9e67-4bc1-8615-648309853aef", 00:33:36.608 "is_configured": true, 00:33:36.608 "data_offset": 2048, 00:33:36.608 "data_size": 63488 00:33:36.608 }, 00:33:36.608 { 00:33:36.608 "name": "BaseBdev3", 00:33:36.608 "uuid": "3cca05ac-ca19-44bb-a4bc-1afcbdb0984a", 00:33:36.608 "is_configured": true, 00:33:36.608 "data_offset": 2048, 00:33:36.608 "data_size": 63488 00:33:36.608 }, 00:33:36.608 { 00:33:36.608 "name": "BaseBdev4", 00:33:36.608 "uuid": "b473f577-4263-4330-9385-343cc614f6b0", 00:33:36.608 "is_configured": true, 00:33:36.608 "data_offset": 2048, 00:33:36.608 "data_size": 63488 00:33:36.608 } 00:33:36.608 ] 00:33:36.608 }' 00:33:36.608 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:36.608 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:37.539 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:33:37.539 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:33:37.539 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:33:37.539 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:33:37.539 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:33:37.539 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:33:37.539 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:37.539 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:33:37.539 [2024-05-15 11:26:56.001372] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:37.539 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:33:37.539 "name": "Existed_Raid", 00:33:37.539 "aliases": [ 00:33:37.539 "1483408a-9ec5-4f2e-9105-b87da4b00ecb" 00:33:37.539 ], 00:33:37.539 "product_name": "Raid Volume", 00:33:37.539 "block_size": 512, 
00:33:37.539 "num_blocks": 63488, 00:33:37.539 "uuid": "1483408a-9ec5-4f2e-9105-b87da4b00ecb", 00:33:37.539 "assigned_rate_limits": { 00:33:37.539 "rw_ios_per_sec": 0, 00:33:37.539 "rw_mbytes_per_sec": 0, 00:33:37.539 "r_mbytes_per_sec": 0, 00:33:37.539 "w_mbytes_per_sec": 0 00:33:37.539 }, 00:33:37.539 "claimed": false, 00:33:37.539 "zoned": false, 00:33:37.539 "supported_io_types": { 00:33:37.539 "read": true, 00:33:37.539 "write": true, 00:33:37.539 "unmap": false, 00:33:37.539 "write_zeroes": true, 00:33:37.539 "flush": false, 00:33:37.539 "reset": true, 00:33:37.539 "compare": false, 00:33:37.539 "compare_and_write": false, 00:33:37.539 "abort": false, 00:33:37.539 "nvme_admin": false, 00:33:37.539 "nvme_io": false 00:33:37.539 }, 00:33:37.539 "memory_domains": [ 00:33:37.539 { 00:33:37.539 "dma_device_id": "system", 00:33:37.539 "dma_device_type": 1 00:33:37.539 }, 00:33:37.539 { 00:33:37.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.539 "dma_device_type": 2 00:33:37.539 }, 00:33:37.539 { 00:33:37.540 "dma_device_id": "system", 00:33:37.540 "dma_device_type": 1 00:33:37.540 }, 00:33:37.540 { 00:33:37.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.540 "dma_device_type": 2 00:33:37.540 }, 00:33:37.540 { 00:33:37.540 "dma_device_id": "system", 00:33:37.540 "dma_device_type": 1 00:33:37.540 }, 00:33:37.540 { 00:33:37.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.540 "dma_device_type": 2 00:33:37.540 }, 00:33:37.540 { 00:33:37.540 "dma_device_id": "system", 00:33:37.540 "dma_device_type": 1 00:33:37.540 }, 00:33:37.540 { 00:33:37.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.540 "dma_device_type": 2 00:33:37.540 } 00:33:37.540 ], 00:33:37.540 "driver_specific": { 00:33:37.540 "raid": { 00:33:37.540 "uuid": "1483408a-9ec5-4f2e-9105-b87da4b00ecb", 00:33:37.540 "strip_size_kb": 0, 00:33:37.540 "state": "online", 00:33:37.540 "raid_level": "raid1", 00:33:37.540 "superblock": true, 00:33:37.540 "num_base_bdevs": 4, 00:33:37.540 "num_base_bdevs_discovered": 4, 00:33:37.540 "num_base_bdevs_operational": 4, 00:33:37.540 "base_bdevs_list": [ 00:33:37.540 { 00:33:37.540 "name": "BaseBdev1", 00:33:37.540 "uuid": "c3fbf0ae-344b-4ab0-84e8-a658b73833a2", 00:33:37.540 "is_configured": true, 00:33:37.540 "data_offset": 2048, 00:33:37.540 "data_size": 63488 00:33:37.540 }, 00:33:37.540 { 00:33:37.540 "name": "BaseBdev2", 00:33:37.540 "uuid": "60ed970f-9e67-4bc1-8615-648309853aef", 00:33:37.540 "is_configured": true, 00:33:37.540 "data_offset": 2048, 00:33:37.540 "data_size": 63488 00:33:37.540 }, 00:33:37.540 { 00:33:37.540 "name": "BaseBdev3", 00:33:37.540 "uuid": "3cca05ac-ca19-44bb-a4bc-1afcbdb0984a", 00:33:37.540 "is_configured": true, 00:33:37.540 "data_offset": 2048, 00:33:37.540 "data_size": 63488 00:33:37.540 }, 00:33:37.540 { 00:33:37.540 "name": "BaseBdev4", 00:33:37.540 "uuid": "b473f577-4263-4330-9385-343cc614f6b0", 00:33:37.540 "is_configured": true, 00:33:37.540 "data_offset": 2048, 00:33:37.540 "data_size": 63488 00:33:37.540 } 00:33:37.540 ] 00:33:37.540 } 00:33:37.540 } 00:33:37.540 }' 00:33:37.540 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:37.540 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:33:37.540 BaseBdev2 00:33:37.540 BaseBdev3 00:33:37.540 BaseBdev4' 00:33:37.540 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in 
$base_bdev_names 00:33:37.540 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:37.540 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:37.798 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:37.798 "name": "BaseBdev1", 00:33:37.798 "aliases": [ 00:33:37.798 "c3fbf0ae-344b-4ab0-84e8-a658b73833a2" 00:33:37.798 ], 00:33:37.798 "product_name": "Malloc disk", 00:33:37.798 "block_size": 512, 00:33:37.798 "num_blocks": 65536, 00:33:37.798 "uuid": "c3fbf0ae-344b-4ab0-84e8-a658b73833a2", 00:33:37.798 "assigned_rate_limits": { 00:33:37.798 "rw_ios_per_sec": 0, 00:33:37.798 "rw_mbytes_per_sec": 0, 00:33:37.798 "r_mbytes_per_sec": 0, 00:33:37.798 "w_mbytes_per_sec": 0 00:33:37.798 }, 00:33:37.798 "claimed": true, 00:33:37.798 "claim_type": "exclusive_write", 00:33:37.798 "zoned": false, 00:33:37.798 "supported_io_types": { 00:33:37.798 "read": true, 00:33:37.798 "write": true, 00:33:37.798 "unmap": true, 00:33:37.798 "write_zeroes": true, 00:33:37.798 "flush": true, 00:33:37.798 "reset": true, 00:33:37.798 "compare": false, 00:33:37.798 "compare_and_write": false, 00:33:37.798 "abort": true, 00:33:37.798 "nvme_admin": false, 00:33:37.798 "nvme_io": false 00:33:37.798 }, 00:33:37.798 "memory_domains": [ 00:33:37.798 { 00:33:37.798 "dma_device_id": "system", 00:33:37.799 "dma_device_type": 1 00:33:37.799 }, 00:33:37.799 { 00:33:37.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.799 "dma_device_type": 2 00:33:37.799 } 00:33:37.799 ], 00:33:37.799 "driver_specific": {} 00:33:37.799 }' 00:33:37.799 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:37.799 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:37.799 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:37.799 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:37.799 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:38.057 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:38.057 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:38.057 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:38.057 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:38.057 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:38.057 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:38.057 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:38.057 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:38.316 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:38.316 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:38.316 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:38.316 "name": "BaseBdev2", 
00:33:38.316 "aliases": [ 00:33:38.316 "60ed970f-9e67-4bc1-8615-648309853aef" 00:33:38.316 ], 00:33:38.316 "product_name": "Malloc disk", 00:33:38.316 "block_size": 512, 00:33:38.316 "num_blocks": 65536, 00:33:38.316 "uuid": "60ed970f-9e67-4bc1-8615-648309853aef", 00:33:38.316 "assigned_rate_limits": { 00:33:38.316 "rw_ios_per_sec": 0, 00:33:38.316 "rw_mbytes_per_sec": 0, 00:33:38.316 "r_mbytes_per_sec": 0, 00:33:38.316 "w_mbytes_per_sec": 0 00:33:38.316 }, 00:33:38.316 "claimed": true, 00:33:38.316 "claim_type": "exclusive_write", 00:33:38.316 "zoned": false, 00:33:38.316 "supported_io_types": { 00:33:38.316 "read": true, 00:33:38.316 "write": true, 00:33:38.316 "unmap": true, 00:33:38.316 "write_zeroes": true, 00:33:38.316 "flush": true, 00:33:38.316 "reset": true, 00:33:38.316 "compare": false, 00:33:38.316 "compare_and_write": false, 00:33:38.316 "abort": true, 00:33:38.316 "nvme_admin": false, 00:33:38.316 "nvme_io": false 00:33:38.316 }, 00:33:38.316 "memory_domains": [ 00:33:38.316 { 00:33:38.316 "dma_device_id": "system", 00:33:38.316 "dma_device_type": 1 00:33:38.316 }, 00:33:38.316 { 00:33:38.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:38.316 "dma_device_type": 2 00:33:38.316 } 00:33:38.316 ], 00:33:38.316 "driver_specific": {} 00:33:38.316 }' 00:33:38.316 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:38.574 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:38.574 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:38.574 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:38.574 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:38.574 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:38.574 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:38.831 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:38.831 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:38.831 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:38.831 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:38.831 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:38.831 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:38.831 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:38.831 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:39.089 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:39.089 "name": "BaseBdev3", 00:33:39.089 "aliases": [ 00:33:39.089 "3cca05ac-ca19-44bb-a4bc-1afcbdb0984a" 00:33:39.089 ], 00:33:39.089 "product_name": "Malloc disk", 00:33:39.089 "block_size": 512, 00:33:39.089 "num_blocks": 65536, 00:33:39.089 "uuid": "3cca05ac-ca19-44bb-a4bc-1afcbdb0984a", 00:33:39.089 "assigned_rate_limits": { 00:33:39.089 "rw_ios_per_sec": 0, 00:33:39.089 "rw_mbytes_per_sec": 0, 00:33:39.089 "r_mbytes_per_sec": 0, 00:33:39.089 "w_mbytes_per_sec": 0 
00:33:39.089 }, 00:33:39.089 "claimed": true, 00:33:39.089 "claim_type": "exclusive_write", 00:33:39.089 "zoned": false, 00:33:39.089 "supported_io_types": { 00:33:39.089 "read": true, 00:33:39.089 "write": true, 00:33:39.089 "unmap": true, 00:33:39.089 "write_zeroes": true, 00:33:39.089 "flush": true, 00:33:39.089 "reset": true, 00:33:39.089 "compare": false, 00:33:39.089 "compare_and_write": false, 00:33:39.089 "abort": true, 00:33:39.089 "nvme_admin": false, 00:33:39.089 "nvme_io": false 00:33:39.089 }, 00:33:39.089 "memory_domains": [ 00:33:39.089 { 00:33:39.089 "dma_device_id": "system", 00:33:39.089 "dma_device_type": 1 00:33:39.089 }, 00:33:39.089 { 00:33:39.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:39.089 "dma_device_type": 2 00:33:39.089 } 00:33:39.089 ], 00:33:39.089 "driver_specific": {} 00:33:39.089 }' 00:33:39.089 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:39.089 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:39.362 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:39.362 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:39.362 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:39.362 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:39.362 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:39.362 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:39.362 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:39.362 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:39.620 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:39.620 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:39.620 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:39.620 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:39.620 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:39.879 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:39.879 "name": "BaseBdev4", 00:33:39.879 "aliases": [ 00:33:39.879 "b473f577-4263-4330-9385-343cc614f6b0" 00:33:39.879 ], 00:33:39.879 "product_name": "Malloc disk", 00:33:39.879 "block_size": 512, 00:33:39.879 "num_blocks": 65536, 00:33:39.879 "uuid": "b473f577-4263-4330-9385-343cc614f6b0", 00:33:39.879 "assigned_rate_limits": { 00:33:39.879 "rw_ios_per_sec": 0, 00:33:39.879 "rw_mbytes_per_sec": 0, 00:33:39.879 "r_mbytes_per_sec": 0, 00:33:39.879 "w_mbytes_per_sec": 0 00:33:39.879 }, 00:33:39.879 "claimed": true, 00:33:39.879 "claim_type": "exclusive_write", 00:33:39.879 "zoned": false, 00:33:39.879 "supported_io_types": { 00:33:39.879 "read": true, 00:33:39.879 "write": true, 00:33:39.879 "unmap": true, 00:33:39.879 "write_zeroes": true, 00:33:39.879 "flush": true, 00:33:39.879 "reset": true, 00:33:39.879 "compare": false, 00:33:39.879 "compare_and_write": false, 00:33:39.879 "abort": true, 00:33:39.879 
"nvme_admin": false, 00:33:39.879 "nvme_io": false 00:33:39.879 }, 00:33:39.879 "memory_domains": [ 00:33:39.879 { 00:33:39.879 "dma_device_id": "system", 00:33:39.879 "dma_device_type": 1 00:33:39.879 }, 00:33:39.879 { 00:33:39.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:39.879 "dma_device_type": 2 00:33:39.879 } 00:33:39.879 ], 00:33:39.879 "driver_specific": {} 00:33:39.879 }' 00:33:39.879 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:39.879 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:39.879 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:39.879 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:39.879 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:40.137 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:40.137 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:40.137 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:40.137 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:40.137 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:40.137 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:40.472 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:40.472 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:40.472 [2024-05-15 11:26:58.977852] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.472 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:40.730 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:40.730 "name": "Existed_Raid", 00:33:40.730 "uuid": "1483408a-9ec5-4f2e-9105-b87da4b00ecb", 00:33:40.730 "strip_size_kb": 0, 00:33:40.730 "state": "online", 00:33:40.730 "raid_level": "raid1", 00:33:40.730 "superblock": true, 00:33:40.730 "num_base_bdevs": 4, 00:33:40.730 "num_base_bdevs_discovered": 3, 00:33:40.730 "num_base_bdevs_operational": 3, 00:33:40.730 "base_bdevs_list": [ 00:33:40.730 { 00:33:40.730 "name": null, 00:33:40.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.730 "is_configured": false, 00:33:40.730 "data_offset": 2048, 00:33:40.730 "data_size": 63488 00:33:40.730 }, 00:33:40.730 { 00:33:40.730 "name": "BaseBdev2", 00:33:40.730 "uuid": "60ed970f-9e67-4bc1-8615-648309853aef", 00:33:40.730 "is_configured": true, 00:33:40.730 "data_offset": 2048, 00:33:40.730 "data_size": 63488 00:33:40.730 }, 00:33:40.730 { 00:33:40.730 "name": "BaseBdev3", 00:33:40.730 "uuid": "3cca05ac-ca19-44bb-a4bc-1afcbdb0984a", 00:33:40.730 "is_configured": true, 00:33:40.730 "data_offset": 2048, 00:33:40.730 "data_size": 63488 00:33:40.730 }, 00:33:40.730 { 00:33:40.730 "name": "BaseBdev4", 00:33:40.730 "uuid": "b473f577-4263-4330-9385-343cc614f6b0", 00:33:40.730 "is_configured": true, 00:33:40.730 "data_offset": 2048, 00:33:40.730 "data_size": 63488 00:33:40.730 } 00:33:40.730 ] 00:33:40.730 }' 00:33:40.730 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:40.730 11:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:41.294 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:41.294 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:41.294 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.294 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:33:41.584 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:33:41.584 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:41.584 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:41.842 [2024-05-15 11:27:00.343972] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:41.842 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:41.842 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:41.842 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:33:41.842 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:33:42.100 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:33:42.100 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:42.100 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:42.357 [2024-05-15 11:27:00.906749] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:42.616 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:42.617 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:42.617 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:33:42.617 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.874 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:33:42.874 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:42.874 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:33:43.132 [2024-05-15 11:27:01.513109] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:33:43.132 [2024-05-15 11:27:01.513212] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:43.132 [2024-05-15 11:27:01.592223] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:43.132 [2024-05-15 11:27:01.592316] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:43.132 [2024-05-15 11:27:01.592331] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:33:43.132 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:43.132 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:43.132 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.132 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:33:43.390 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:33:43.390 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:33:43.390 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:33:43.390 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:33:43.390 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:33:43.390 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:43.648 BaseBdev2 00:33:43.648 
11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:33:43.648 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:33:43.648 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:43.648 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:43.648 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:43.648 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:43.648 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:43.906 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:44.164 [ 00:33:44.164 { 00:33:44.164 "name": "BaseBdev2", 00:33:44.164 "aliases": [ 00:33:44.164 "3825767f-10cd-4bf8-99a3-05efcfc58909" 00:33:44.164 ], 00:33:44.164 "product_name": "Malloc disk", 00:33:44.164 "block_size": 512, 00:33:44.164 "num_blocks": 65536, 00:33:44.164 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:44.164 "assigned_rate_limits": { 00:33:44.164 "rw_ios_per_sec": 0, 00:33:44.164 "rw_mbytes_per_sec": 0, 00:33:44.164 "r_mbytes_per_sec": 0, 00:33:44.164 "w_mbytes_per_sec": 0 00:33:44.164 }, 00:33:44.164 "claimed": false, 00:33:44.164 "zoned": false, 00:33:44.164 "supported_io_types": { 00:33:44.164 "read": true, 00:33:44.164 "write": true, 00:33:44.164 "unmap": true, 00:33:44.164 "write_zeroes": true, 00:33:44.164 "flush": true, 00:33:44.164 "reset": true, 00:33:44.164 "compare": false, 00:33:44.164 "compare_and_write": false, 00:33:44.164 "abort": true, 00:33:44.164 "nvme_admin": false, 00:33:44.164 "nvme_io": false 00:33:44.164 }, 00:33:44.164 "memory_domains": [ 00:33:44.164 { 00:33:44.164 "dma_device_id": "system", 00:33:44.164 "dma_device_type": 1 00:33:44.164 }, 00:33:44.164 { 00:33:44.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:44.164 "dma_device_type": 2 00:33:44.164 } 00:33:44.164 ], 00:33:44.164 "driver_specific": {} 00:33:44.164 } 00:33:44.164 ] 00:33:44.164 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:44.164 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:33:44.165 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:33:44.165 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:44.423 BaseBdev3 00:33:44.423 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:33:44.423 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:33:44.423 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:44.423 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:44.423 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:44.423 11:27:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:44.423 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:44.689 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:44.689 [ 00:33:44.689 { 00:33:44.689 "name": "BaseBdev3", 00:33:44.689 "aliases": [ 00:33:44.689 "b8443b70-206a-41f3-82f2-9605d026739a" 00:33:44.689 ], 00:33:44.689 "product_name": "Malloc disk", 00:33:44.689 "block_size": 512, 00:33:44.689 "num_blocks": 65536, 00:33:44.689 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:44.689 "assigned_rate_limits": { 00:33:44.689 "rw_ios_per_sec": 0, 00:33:44.689 "rw_mbytes_per_sec": 0, 00:33:44.689 "r_mbytes_per_sec": 0, 00:33:44.689 "w_mbytes_per_sec": 0 00:33:44.689 }, 00:33:44.689 "claimed": false, 00:33:44.689 "zoned": false, 00:33:44.689 "supported_io_types": { 00:33:44.689 "read": true, 00:33:44.689 "write": true, 00:33:44.689 "unmap": true, 00:33:44.689 "write_zeroes": true, 00:33:44.689 "flush": true, 00:33:44.689 "reset": true, 00:33:44.689 "compare": false, 00:33:44.689 "compare_and_write": false, 00:33:44.689 "abort": true, 00:33:44.689 "nvme_admin": false, 00:33:44.689 "nvme_io": false 00:33:44.689 }, 00:33:44.689 "memory_domains": [ 00:33:44.689 { 00:33:44.689 "dma_device_id": "system", 00:33:44.689 "dma_device_type": 1 00:33:44.689 }, 00:33:44.689 { 00:33:44.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:44.689 "dma_device_type": 2 00:33:44.689 } 00:33:44.689 ], 00:33:44.689 "driver_specific": {} 00:33:44.689 } 00:33:44.689 ] 00:33:44.950 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:44.950 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:33:44.950 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:33:44.950 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:45.209 BaseBdev4 00:33:45.209 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:33:45.209 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:33:45.209 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:45.209 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:45.209 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:45.209 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:45.209 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:45.469 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:45.728 [ 00:33:45.728 { 00:33:45.728 "name": "BaseBdev4", 00:33:45.729 "aliases": [ 00:33:45.729 
"ab3d74cf-6530-4ff9-91fb-cf7723d721c6" 00:33:45.729 ], 00:33:45.729 "product_name": "Malloc disk", 00:33:45.729 "block_size": 512, 00:33:45.729 "num_blocks": 65536, 00:33:45.729 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:45.729 "assigned_rate_limits": { 00:33:45.729 "rw_ios_per_sec": 0, 00:33:45.729 "rw_mbytes_per_sec": 0, 00:33:45.729 "r_mbytes_per_sec": 0, 00:33:45.729 "w_mbytes_per_sec": 0 00:33:45.729 }, 00:33:45.729 "claimed": false, 00:33:45.729 "zoned": false, 00:33:45.729 "supported_io_types": { 00:33:45.729 "read": true, 00:33:45.729 "write": true, 00:33:45.729 "unmap": true, 00:33:45.729 "write_zeroes": true, 00:33:45.729 "flush": true, 00:33:45.729 "reset": true, 00:33:45.729 "compare": false, 00:33:45.729 "compare_and_write": false, 00:33:45.729 "abort": true, 00:33:45.729 "nvme_admin": false, 00:33:45.729 "nvme_io": false 00:33:45.729 }, 00:33:45.729 "memory_domains": [ 00:33:45.729 { 00:33:45.729 "dma_device_id": "system", 00:33:45.729 "dma_device_type": 1 00:33:45.729 }, 00:33:45.729 { 00:33:45.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:45.729 "dma_device_type": 2 00:33:45.729 } 00:33:45.729 ], 00:33:45.729 "driver_specific": {} 00:33:45.729 } 00:33:45.729 ] 00:33:45.729 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:45.729 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:33:45.729 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:33:45.729 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:45.729 [2024-05-15 11:27:04.352538] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:45.729 [2024-05-15 11:27:04.352659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:45.729 [2024-05-15 11:27:04.352699] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:45.729 [2024-05-15 11:27:04.354755] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:45.729 [2024-05-15 11:27:04.354809] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:45.988 11:27:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:45.988 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:45.988 "name": "Existed_Raid", 00:33:45.988 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:45.988 "strip_size_kb": 0, 00:33:45.988 "state": "configuring", 00:33:45.988 "raid_level": "raid1", 00:33:45.988 "superblock": true, 00:33:45.988 "num_base_bdevs": 4, 00:33:45.988 "num_base_bdevs_discovered": 3, 00:33:45.988 "num_base_bdevs_operational": 4, 00:33:45.988 "base_bdevs_list": [ 00:33:45.988 { 00:33:45.988 "name": "BaseBdev1", 00:33:45.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.989 "is_configured": false, 00:33:45.989 "data_offset": 0, 00:33:45.989 "data_size": 0 00:33:45.989 }, 00:33:45.989 { 00:33:45.989 "name": "BaseBdev2", 00:33:45.989 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:45.989 "is_configured": true, 00:33:45.989 "data_offset": 2048, 00:33:45.989 "data_size": 63488 00:33:45.989 }, 00:33:45.989 { 00:33:45.989 "name": "BaseBdev3", 00:33:45.989 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:45.989 "is_configured": true, 00:33:45.989 "data_offset": 2048, 00:33:45.989 "data_size": 63488 00:33:45.989 }, 00:33:45.989 { 00:33:45.989 "name": "BaseBdev4", 00:33:45.989 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:45.989 "is_configured": true, 00:33:45.989 "data_offset": 2048, 00:33:45.989 "data_size": 63488 00:33:45.989 } 00:33:45.989 ] 00:33:45.989 }' 00:33:45.989 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:45.989 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:46.957 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:47.216 [2024-05-15 11:27:05.628659] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:47.216 
11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.216 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:47.474 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:47.474 "name": "Existed_Raid", 00:33:47.474 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:47.475 "strip_size_kb": 0, 00:33:47.475 "state": "configuring", 00:33:47.475 "raid_level": "raid1", 00:33:47.475 "superblock": true, 00:33:47.475 "num_base_bdevs": 4, 00:33:47.475 "num_base_bdevs_discovered": 2, 00:33:47.475 "num_base_bdevs_operational": 4, 00:33:47.475 "base_bdevs_list": [ 00:33:47.475 { 00:33:47.475 "name": "BaseBdev1", 00:33:47.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.475 "is_configured": false, 00:33:47.475 "data_offset": 0, 00:33:47.475 "data_size": 0 00:33:47.475 }, 00:33:47.475 { 00:33:47.475 "name": null, 00:33:47.475 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:47.475 "is_configured": false, 00:33:47.475 "data_offset": 2048, 00:33:47.475 "data_size": 63488 00:33:47.475 }, 00:33:47.475 { 00:33:47.475 "name": "BaseBdev3", 00:33:47.475 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:47.475 "is_configured": true, 00:33:47.475 "data_offset": 2048, 00:33:47.475 "data_size": 63488 00:33:47.475 }, 00:33:47.475 { 00:33:47.475 "name": "BaseBdev4", 00:33:47.475 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:47.475 "is_configured": true, 00:33:47.475 "data_offset": 2048, 00:33:47.475 "data_size": 63488 00:33:47.475 } 00:33:47.475 ] 00:33:47.475 }' 00:33:47.475 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:47.475 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:48.041 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.041 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:48.607 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:33:48.607 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:48.607 [2024-05-15 11:27:07.200233] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:48.607 BaseBdev1 00:33:48.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:33:48.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:33:48.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:48.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:48.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:48.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:48.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:48.866 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:49.434 [ 00:33:49.434 { 00:33:49.434 "name": "BaseBdev1", 00:33:49.434 "aliases": [ 00:33:49.434 "78f7c206-e951-4c5a-9200-6acb0be63d34" 00:33:49.434 ], 00:33:49.434 "product_name": "Malloc disk", 00:33:49.434 "block_size": 512, 00:33:49.434 "num_blocks": 65536, 00:33:49.434 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:49.434 "assigned_rate_limits": { 00:33:49.434 "rw_ios_per_sec": 0, 00:33:49.434 "rw_mbytes_per_sec": 0, 00:33:49.434 "r_mbytes_per_sec": 0, 00:33:49.434 "w_mbytes_per_sec": 0 00:33:49.434 }, 00:33:49.434 "claimed": true, 00:33:49.434 "claim_type": "exclusive_write", 00:33:49.434 "zoned": false, 00:33:49.434 "supported_io_types": { 00:33:49.434 "read": true, 00:33:49.434 "write": true, 00:33:49.434 "unmap": true, 00:33:49.434 "write_zeroes": true, 00:33:49.434 "flush": true, 00:33:49.434 "reset": true, 00:33:49.434 "compare": false, 00:33:49.434 "compare_and_write": false, 00:33:49.434 "abort": true, 00:33:49.434 "nvme_admin": false, 00:33:49.434 "nvme_io": false 00:33:49.434 }, 00:33:49.434 "memory_domains": [ 00:33:49.434 { 00:33:49.434 "dma_device_id": "system", 00:33:49.434 "dma_device_type": 1 00:33:49.434 }, 00:33:49.434 { 00:33:49.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:49.434 "dma_device_type": 2 00:33:49.434 } 00:33:49.434 ], 00:33:49.434 "driver_specific": {} 00:33:49.434 } 00:33:49.434 ] 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.434 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:49.694 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:49.694 "name": "Existed_Raid", 00:33:49.694 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:49.694 "strip_size_kb": 0, 00:33:49.694 "state": "configuring", 
00:33:49.694 "raid_level": "raid1", 00:33:49.694 "superblock": true, 00:33:49.694 "num_base_bdevs": 4, 00:33:49.694 "num_base_bdevs_discovered": 3, 00:33:49.694 "num_base_bdevs_operational": 4, 00:33:49.694 "base_bdevs_list": [ 00:33:49.694 { 00:33:49.694 "name": "BaseBdev1", 00:33:49.694 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:49.694 "is_configured": true, 00:33:49.694 "data_offset": 2048, 00:33:49.694 "data_size": 63488 00:33:49.694 }, 00:33:49.694 { 00:33:49.694 "name": null, 00:33:49.694 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:49.694 "is_configured": false, 00:33:49.694 "data_offset": 2048, 00:33:49.694 "data_size": 63488 00:33:49.694 }, 00:33:49.694 { 00:33:49.694 "name": "BaseBdev3", 00:33:49.694 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:49.694 "is_configured": true, 00:33:49.694 "data_offset": 2048, 00:33:49.694 "data_size": 63488 00:33:49.694 }, 00:33:49.694 { 00:33:49.694 "name": "BaseBdev4", 00:33:49.694 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:49.694 "is_configured": true, 00:33:49.694 "data_offset": 2048, 00:33:49.694 "data_size": 63488 00:33:49.694 } 00:33:49.694 ] 00:33:49.694 }' 00:33:49.694 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:49.694 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.262 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.262 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:50.522 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:50.522 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:33:50.522 [2024-05-15 11:27:09.104582] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.522 11:27:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:50.781 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:50.781 "name": "Existed_Raid", 00:33:50.781 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:50.781 "strip_size_kb": 0, 00:33:50.781 "state": "configuring", 00:33:50.781 "raid_level": "raid1", 00:33:50.781 "superblock": true, 00:33:50.781 "num_base_bdevs": 4, 00:33:50.781 "num_base_bdevs_discovered": 2, 00:33:50.781 "num_base_bdevs_operational": 4, 00:33:50.781 "base_bdevs_list": [ 00:33:50.781 { 00:33:50.781 "name": "BaseBdev1", 00:33:50.781 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:50.781 "is_configured": true, 00:33:50.781 "data_offset": 2048, 00:33:50.781 "data_size": 63488 00:33:50.781 }, 00:33:50.781 { 00:33:50.781 "name": null, 00:33:50.781 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:50.781 "is_configured": false, 00:33:50.781 "data_offset": 2048, 00:33:50.781 "data_size": 63488 00:33:50.781 }, 00:33:50.781 { 00:33:50.781 "name": null, 00:33:50.781 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:50.781 "is_configured": false, 00:33:50.781 "data_offset": 2048, 00:33:50.781 "data_size": 63488 00:33:50.781 }, 00:33:50.781 { 00:33:50.781 "name": "BaseBdev4", 00:33:50.781 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:50.781 "is_configured": true, 00:33:50.781 "data_offset": 2048, 00:33:50.781 "data_size": 63488 00:33:50.781 } 00:33:50.781 ] 00:33:50.781 }' 00:33:50.781 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:50.781 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.394 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.394 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:51.660 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:33:51.660 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:51.919 [2024-05-15 11:27:10.408833] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.919 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:52.178 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:52.178 "name": "Existed_Raid", 00:33:52.178 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:52.178 "strip_size_kb": 0, 00:33:52.178 "state": "configuring", 00:33:52.178 "raid_level": "raid1", 00:33:52.178 "superblock": true, 00:33:52.178 "num_base_bdevs": 4, 00:33:52.178 "num_base_bdevs_discovered": 3, 00:33:52.178 "num_base_bdevs_operational": 4, 00:33:52.178 "base_bdevs_list": [ 00:33:52.178 { 00:33:52.178 "name": "BaseBdev1", 00:33:52.178 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:52.178 "is_configured": true, 00:33:52.178 "data_offset": 2048, 00:33:52.178 "data_size": 63488 00:33:52.178 }, 00:33:52.178 { 00:33:52.178 "name": null, 00:33:52.178 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:52.178 "is_configured": false, 00:33:52.178 "data_offset": 2048, 00:33:52.178 "data_size": 63488 00:33:52.178 }, 00:33:52.178 { 00:33:52.178 "name": "BaseBdev3", 00:33:52.178 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:52.178 "is_configured": true, 00:33:52.178 "data_offset": 2048, 00:33:52.178 "data_size": 63488 00:33:52.178 }, 00:33:52.178 { 00:33:52.178 "name": "BaseBdev4", 00:33:52.178 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:52.178 "is_configured": true, 00:33:52.178 "data_offset": 2048, 00:33:52.178 "data_size": 63488 00:33:52.178 } 00:33:52.178 ] 00:33:52.178 }' 00:33:52.178 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:52.178 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.746 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.746 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:53.005 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:33:53.005 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:53.264 [2024-05-15 11:27:11.665063] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:53.264 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.522 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:53.522 "name": "Existed_Raid", 00:33:53.522 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:53.522 "strip_size_kb": 0, 00:33:53.522 "state": "configuring", 00:33:53.522 "raid_level": "raid1", 00:33:53.522 "superblock": true, 00:33:53.522 "num_base_bdevs": 4, 00:33:53.522 "num_base_bdevs_discovered": 2, 00:33:53.522 "num_base_bdevs_operational": 4, 00:33:53.522 "base_bdevs_list": [ 00:33:53.522 { 00:33:53.522 "name": null, 00:33:53.522 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:53.522 "is_configured": false, 00:33:53.522 "data_offset": 2048, 00:33:53.522 "data_size": 63488 00:33:53.522 }, 00:33:53.522 { 00:33:53.522 "name": null, 00:33:53.522 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:53.522 "is_configured": false, 00:33:53.522 "data_offset": 2048, 00:33:53.522 "data_size": 63488 00:33:53.522 }, 00:33:53.522 { 00:33:53.522 "name": "BaseBdev3", 00:33:53.522 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:53.522 "is_configured": true, 00:33:53.522 "data_offset": 2048, 00:33:53.522 "data_size": 63488 00:33:53.522 }, 00:33:53.522 { 00:33:53.522 "name": "BaseBdev4", 00:33:53.522 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:53.522 "is_configured": true, 00:33:53.522 "data_offset": 2048, 00:33:53.522 "data_size": 63488 00:33:53.522 } 00:33:53.522 ] 00:33:53.522 }' 00:33:53.522 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:53.522 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.093 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.093 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:54.351 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:33:54.351 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:54.610 [2024-05-15 11:27:13.114176] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.610 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:54.868 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:54.868 "name": "Existed_Raid", 00:33:54.868 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:54.868 "strip_size_kb": 0, 00:33:54.868 "state": "configuring", 00:33:54.868 "raid_level": "raid1", 00:33:54.868 "superblock": true, 00:33:54.868 "num_base_bdevs": 4, 00:33:54.868 "num_base_bdevs_discovered": 3, 00:33:54.868 "num_base_bdevs_operational": 4, 00:33:54.868 "base_bdevs_list": [ 00:33:54.868 { 00:33:54.868 "name": null, 00:33:54.868 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:54.868 "is_configured": false, 00:33:54.868 "data_offset": 2048, 00:33:54.868 "data_size": 63488 00:33:54.868 }, 00:33:54.868 { 00:33:54.868 "name": "BaseBdev2", 00:33:54.868 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:54.868 "is_configured": true, 00:33:54.868 "data_offset": 2048, 00:33:54.868 "data_size": 63488 00:33:54.868 }, 00:33:54.868 { 00:33:54.868 "name": "BaseBdev3", 00:33:54.868 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:54.868 "is_configured": true, 00:33:54.868 "data_offset": 2048, 00:33:54.868 "data_size": 63488 00:33:54.868 }, 00:33:54.868 { 00:33:54.868 "name": "BaseBdev4", 00:33:54.868 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:54.868 "is_configured": true, 00:33:54.868 "data_offset": 2048, 00:33:54.868 "data_size": 63488 00:33:54.868 } 00:33:54.868 ] 00:33:54.868 }' 00:33:54.868 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:54.868 11:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.435 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.435 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:55.694 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:33:55.694 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.694 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:33:55.953 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 78f7c206-e951-4c5a-9200-6acb0be63d34 00:33:56.212 [2024-05-15 11:27:14.662476] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:56.212 [2024-05-15 11:27:14.662671] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:33:56.212 [2024-05-15 11:27:14.662689] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:56.212 [2024-05-15 11:27:14.662776] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:33:56.212 NewBaseBdev 00:33:56.212 [2024-05-15 11:27:14.663290] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:33:56.212 [2024-05-15 11:27:14.663310] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:33:56.212 [2024-05-15 11:27:14.663410] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:56.212 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:33:56.212 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:33:56.212 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:56.212 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:56.212 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:56.212 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:56.212 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:56.471 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:56.731 [ 00:33:56.731 { 00:33:56.731 "name": "NewBaseBdev", 00:33:56.731 "aliases": [ 00:33:56.731 "78f7c206-e951-4c5a-9200-6acb0be63d34" 00:33:56.731 ], 00:33:56.731 "product_name": "Malloc disk", 00:33:56.731 "block_size": 512, 00:33:56.731 "num_blocks": 65536, 00:33:56.731 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:56.731 "assigned_rate_limits": { 00:33:56.731 "rw_ios_per_sec": 0, 00:33:56.731 "rw_mbytes_per_sec": 0, 00:33:56.731 "r_mbytes_per_sec": 0, 00:33:56.731 "w_mbytes_per_sec": 0 00:33:56.731 }, 00:33:56.731 "claimed": true, 00:33:56.731 "claim_type": "exclusive_write", 00:33:56.731 "zoned": false, 00:33:56.731 "supported_io_types": { 00:33:56.731 "read": true, 00:33:56.731 "write": true, 00:33:56.731 "unmap": true, 00:33:56.731 "write_zeroes": true, 00:33:56.731 "flush": true, 00:33:56.731 "reset": true, 00:33:56.731 "compare": false, 00:33:56.731 "compare_and_write": false, 00:33:56.731 "abort": true, 00:33:56.731 "nvme_admin": false, 00:33:56.731 "nvme_io": false 00:33:56.731 }, 00:33:56.731 "memory_domains": [ 00:33:56.731 { 00:33:56.731 "dma_device_id": "system", 00:33:56.731 "dma_device_type": 1 00:33:56.731 }, 00:33:56.731 { 00:33:56.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:56.732 "dma_device_type": 
2 00:33:56.732 } 00:33:56.732 ], 00:33:56.732 "driver_specific": {} 00:33:56.732 } 00:33:56.732 ] 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.732 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.997 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:56.997 "name": "Existed_Raid", 00:33:56.997 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:56.997 "strip_size_kb": 0, 00:33:56.997 "state": "online", 00:33:56.997 "raid_level": "raid1", 00:33:56.997 "superblock": true, 00:33:56.997 "num_base_bdevs": 4, 00:33:56.997 "num_base_bdevs_discovered": 4, 00:33:56.997 "num_base_bdevs_operational": 4, 00:33:56.997 "base_bdevs_list": [ 00:33:56.997 { 00:33:56.997 "name": "NewBaseBdev", 00:33:56.997 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:56.997 "is_configured": true, 00:33:56.997 "data_offset": 2048, 00:33:56.997 "data_size": 63488 00:33:56.997 }, 00:33:56.997 { 00:33:56.997 "name": "BaseBdev2", 00:33:56.997 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:56.997 "is_configured": true, 00:33:56.997 "data_offset": 2048, 00:33:56.997 "data_size": 63488 00:33:56.997 }, 00:33:56.997 { 00:33:56.997 "name": "BaseBdev3", 00:33:56.997 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:56.997 "is_configured": true, 00:33:56.997 "data_offset": 2048, 00:33:56.997 "data_size": 63488 00:33:56.997 }, 00:33:56.997 { 00:33:56.997 "name": "BaseBdev4", 00:33:56.997 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:56.997 "is_configured": true, 00:33:56.997 "data_offset": 2048, 00:33:56.997 "data_size": 63488 00:33:56.997 } 00:33:56.997 ] 00:33:56.997 }' 00:33:56.997 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:56.997 11:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.564 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:33:57.564 11:27:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:33:57.564 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:33:57.564 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:33:57.564 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:33:57.564 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:33:57.564 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:57.564 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:33:57.823 [2024-05-15 11:27:16.254988] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:57.823 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:33:57.823 "name": "Existed_Raid", 00:33:57.823 "aliases": [ 00:33:57.823 "4ca84d9a-3de5-4607-8422-c83926ce10be" 00:33:57.823 ], 00:33:57.823 "product_name": "Raid Volume", 00:33:57.823 "block_size": 512, 00:33:57.823 "num_blocks": 63488, 00:33:57.823 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:57.823 "assigned_rate_limits": { 00:33:57.823 "rw_ios_per_sec": 0, 00:33:57.823 "rw_mbytes_per_sec": 0, 00:33:57.823 "r_mbytes_per_sec": 0, 00:33:57.823 "w_mbytes_per_sec": 0 00:33:57.823 }, 00:33:57.823 "claimed": false, 00:33:57.823 "zoned": false, 00:33:57.823 "supported_io_types": { 00:33:57.823 "read": true, 00:33:57.823 "write": true, 00:33:57.823 "unmap": false, 00:33:57.823 "write_zeroes": true, 00:33:57.823 "flush": false, 00:33:57.823 "reset": true, 00:33:57.823 "compare": false, 00:33:57.823 "compare_and_write": false, 00:33:57.823 "abort": false, 00:33:57.823 "nvme_admin": false, 00:33:57.823 "nvme_io": false 00:33:57.823 }, 00:33:57.823 "memory_domains": [ 00:33:57.823 { 00:33:57.823 "dma_device_id": "system", 00:33:57.823 "dma_device_type": 1 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.823 "dma_device_type": 2 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "dma_device_id": "system", 00:33:57.823 "dma_device_type": 1 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.823 "dma_device_type": 2 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "dma_device_id": "system", 00:33:57.823 "dma_device_type": 1 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.823 "dma_device_type": 2 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "dma_device_id": "system", 00:33:57.823 "dma_device_type": 1 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.823 "dma_device_type": 2 00:33:57.823 } 00:33:57.823 ], 00:33:57.823 "driver_specific": { 00:33:57.823 "raid": { 00:33:57.823 "uuid": "4ca84d9a-3de5-4607-8422-c83926ce10be", 00:33:57.823 "strip_size_kb": 0, 00:33:57.823 "state": "online", 00:33:57.823 "raid_level": "raid1", 00:33:57.823 "superblock": true, 00:33:57.823 "num_base_bdevs": 4, 00:33:57.823 "num_base_bdevs_discovered": 4, 00:33:57.823 "num_base_bdevs_operational": 4, 00:33:57.823 "base_bdevs_list": [ 00:33:57.823 { 00:33:57.823 "name": "NewBaseBdev", 00:33:57.823 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:57.823 "is_configured": true, 00:33:57.823 "data_offset": 2048, 00:33:57.823 "data_size": 63488 00:33:57.823 
}, 00:33:57.823 { 00:33:57.823 "name": "BaseBdev2", 00:33:57.823 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:57.823 "is_configured": true, 00:33:57.823 "data_offset": 2048, 00:33:57.823 "data_size": 63488 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "name": "BaseBdev3", 00:33:57.823 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:57.823 "is_configured": true, 00:33:57.823 "data_offset": 2048, 00:33:57.823 "data_size": 63488 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "name": "BaseBdev4", 00:33:57.823 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:33:57.823 "is_configured": true, 00:33:57.823 "data_offset": 2048, 00:33:57.823 "data_size": 63488 00:33:57.823 } 00:33:57.823 ] 00:33:57.823 } 00:33:57.823 } 00:33:57.823 }' 00:33:57.823 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:57.823 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:33:57.823 BaseBdev2 00:33:57.823 BaseBdev3 00:33:57.823 BaseBdev4' 00:33:57.823 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:57.823 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:57.823 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:33:58.082 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:58.082 "name": "NewBaseBdev", 00:33:58.082 "aliases": [ 00:33:58.082 "78f7c206-e951-4c5a-9200-6acb0be63d34" 00:33:58.082 ], 00:33:58.082 "product_name": "Malloc disk", 00:33:58.082 "block_size": 512, 00:33:58.082 "num_blocks": 65536, 00:33:58.082 "uuid": "78f7c206-e951-4c5a-9200-6acb0be63d34", 00:33:58.082 "assigned_rate_limits": { 00:33:58.082 "rw_ios_per_sec": 0, 00:33:58.082 "rw_mbytes_per_sec": 0, 00:33:58.082 "r_mbytes_per_sec": 0, 00:33:58.082 "w_mbytes_per_sec": 0 00:33:58.082 }, 00:33:58.082 "claimed": true, 00:33:58.082 "claim_type": "exclusive_write", 00:33:58.082 "zoned": false, 00:33:58.082 "supported_io_types": { 00:33:58.082 "read": true, 00:33:58.082 "write": true, 00:33:58.082 "unmap": true, 00:33:58.082 "write_zeroes": true, 00:33:58.082 "flush": true, 00:33:58.082 "reset": true, 00:33:58.082 "compare": false, 00:33:58.082 "compare_and_write": false, 00:33:58.082 "abort": true, 00:33:58.082 "nvme_admin": false, 00:33:58.082 "nvme_io": false 00:33:58.082 }, 00:33:58.082 "memory_domains": [ 00:33:58.082 { 00:33:58.082 "dma_device_id": "system", 00:33:58.082 "dma_device_type": 1 00:33:58.082 }, 00:33:58.082 { 00:33:58.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:58.082 "dma_device_type": 2 00:33:58.082 } 00:33:58.082 ], 00:33:58.082 "driver_specific": {} 00:33:58.082 }' 00:33:58.082 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:58.082 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:58.082 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:58.082 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:58.341 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:58.341 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 
-- # [[ null == null ]] 00:33:58.341 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:58.341 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:58.341 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:58.341 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:58.599 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:58.599 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:58.599 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:58.599 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:58.599 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:58.858 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:58.858 "name": "BaseBdev2", 00:33:58.858 "aliases": [ 00:33:58.858 "3825767f-10cd-4bf8-99a3-05efcfc58909" 00:33:58.858 ], 00:33:58.858 "product_name": "Malloc disk", 00:33:58.858 "block_size": 512, 00:33:58.858 "num_blocks": 65536, 00:33:58.858 "uuid": "3825767f-10cd-4bf8-99a3-05efcfc58909", 00:33:58.858 "assigned_rate_limits": { 00:33:58.858 "rw_ios_per_sec": 0, 00:33:58.858 "rw_mbytes_per_sec": 0, 00:33:58.858 "r_mbytes_per_sec": 0, 00:33:58.858 "w_mbytes_per_sec": 0 00:33:58.858 }, 00:33:58.858 "claimed": true, 00:33:58.858 "claim_type": "exclusive_write", 00:33:58.858 "zoned": false, 00:33:58.858 "supported_io_types": { 00:33:58.858 "read": true, 00:33:58.858 "write": true, 00:33:58.858 "unmap": true, 00:33:58.858 "write_zeroes": true, 00:33:58.858 "flush": true, 00:33:58.858 "reset": true, 00:33:58.858 "compare": false, 00:33:58.858 "compare_and_write": false, 00:33:58.858 "abort": true, 00:33:58.858 "nvme_admin": false, 00:33:58.858 "nvme_io": false 00:33:58.858 }, 00:33:58.858 "memory_domains": [ 00:33:58.858 { 00:33:58.858 "dma_device_id": "system", 00:33:58.858 "dma_device_type": 1 00:33:58.858 }, 00:33:58.858 { 00:33:58.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:58.858 "dma_device_type": 2 00:33:58.858 } 00:33:58.858 ], 00:33:58.858 "driver_specific": {} 00:33:58.858 }' 00:33:58.858 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:58.858 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:58.858 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:58.858 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:58.858 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:59.117 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:59.117 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:59.117 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:59.117 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:59.117 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 
00:33:59.117 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:59.375 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:59.375 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:59.375 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:33:59.375 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:59.634 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:33:59.634 "name": "BaseBdev3", 00:33:59.634 "aliases": [ 00:33:59.634 "b8443b70-206a-41f3-82f2-9605d026739a" 00:33:59.634 ], 00:33:59.634 "product_name": "Malloc disk", 00:33:59.634 "block_size": 512, 00:33:59.634 "num_blocks": 65536, 00:33:59.634 "uuid": "b8443b70-206a-41f3-82f2-9605d026739a", 00:33:59.634 "assigned_rate_limits": { 00:33:59.634 "rw_ios_per_sec": 0, 00:33:59.634 "rw_mbytes_per_sec": 0, 00:33:59.634 "r_mbytes_per_sec": 0, 00:33:59.634 "w_mbytes_per_sec": 0 00:33:59.634 }, 00:33:59.634 "claimed": true, 00:33:59.634 "claim_type": "exclusive_write", 00:33:59.634 "zoned": false, 00:33:59.634 "supported_io_types": { 00:33:59.634 "read": true, 00:33:59.634 "write": true, 00:33:59.634 "unmap": true, 00:33:59.634 "write_zeroes": true, 00:33:59.634 "flush": true, 00:33:59.634 "reset": true, 00:33:59.634 "compare": false, 00:33:59.634 "compare_and_write": false, 00:33:59.634 "abort": true, 00:33:59.634 "nvme_admin": false, 00:33:59.634 "nvme_io": false 00:33:59.634 }, 00:33:59.634 "memory_domains": [ 00:33:59.634 { 00:33:59.634 "dma_device_id": "system", 00:33:59.634 "dma_device_type": 1 00:33:59.634 }, 00:33:59.634 { 00:33:59.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:59.634 "dma_device_type": 2 00:33:59.634 } 00:33:59.634 ], 00:33:59.634 "driver_specific": {} 00:33:59.634 }' 00:33:59.634 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:59.634 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:33:59.634 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:33:59.634 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:59.634 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:33:59.634 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:59.634 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:59.894 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:33:59.894 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:59.894 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:59.894 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:33:59.894 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:33:59.894 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:33:59.894 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:59.894 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:00.152 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:00.152 "name": "BaseBdev4", 00:34:00.152 "aliases": [ 00:34:00.152 "ab3d74cf-6530-4ff9-91fb-cf7723d721c6" 00:34:00.152 ], 00:34:00.152 "product_name": "Malloc disk", 00:34:00.152 "block_size": 512, 00:34:00.152 "num_blocks": 65536, 00:34:00.152 "uuid": "ab3d74cf-6530-4ff9-91fb-cf7723d721c6", 00:34:00.152 "assigned_rate_limits": { 00:34:00.152 "rw_ios_per_sec": 0, 00:34:00.152 "rw_mbytes_per_sec": 0, 00:34:00.152 "r_mbytes_per_sec": 0, 00:34:00.152 "w_mbytes_per_sec": 0 00:34:00.152 }, 00:34:00.152 "claimed": true, 00:34:00.152 "claim_type": "exclusive_write", 00:34:00.152 "zoned": false, 00:34:00.152 "supported_io_types": { 00:34:00.152 "read": true, 00:34:00.152 "write": true, 00:34:00.152 "unmap": true, 00:34:00.152 "write_zeroes": true, 00:34:00.152 "flush": true, 00:34:00.152 "reset": true, 00:34:00.152 "compare": false, 00:34:00.152 "compare_and_write": false, 00:34:00.152 "abort": true, 00:34:00.152 "nvme_admin": false, 00:34:00.152 "nvme_io": false 00:34:00.152 }, 00:34:00.152 "memory_domains": [ 00:34:00.152 { 00:34:00.152 "dma_device_id": "system", 00:34:00.152 "dma_device_type": 1 00:34:00.152 }, 00:34:00.152 { 00:34:00.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.152 "dma_device_type": 2 00:34:00.152 } 00:34:00.152 ], 00:34:00.152 "driver_specific": {} 00:34:00.152 }' 00:34:00.152 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:00.152 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:00.411 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:34:00.411 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:00.411 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:00.411 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:00.411 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:00.411 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:00.669 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:00.669 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:00.669 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:00.669 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:00.669 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:00.927 [2024-05-15 11:27:19.383238] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:00.927 [2024-05-15 11:27:19.383283] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:00.927 [2024-05-15 11:27:19.383361] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:00.927 [2024-05-15 11:27:19.383607] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:34:00.927 [2024-05-15 11:27:19.383644] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 71178 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 71178 ']' 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 71178 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71178 00:34:00.927 killing process with pid 71178 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71178' 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 71178 00:34:00.927 11:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 71178 00:34:00.927 [2024-05-15 11:27:19.415084] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:01.185 [2024-05-15 11:27:19.774126] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:02.666 ************************************ 00:34:02.666 END TEST raid_state_function_test_sb 00:34:02.666 ************************************ 00:34:02.666 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:34:02.666 00:34:02.666 real 0m35.806s 00:34:02.666 user 1m7.341s 00:34:02.666 sys 0m3.575s 00:34:02.666 11:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:02.666 11:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.666 11:27:21 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:34:02.666 11:27:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:34:02.666 11:27:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:02.666 11:27:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:02.666 ************************************ 00:34:02.666 START TEST raid_superblock_test 00:34:02.666 ************************************ 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72297 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72297 /var/tmp/spdk-raid.sock 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 72297 ']' 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:02.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:02.666 11:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.666 [2024-05-15 11:27:21.272017] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
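In outline, the setup this test performs over /var/tmp/spdk-raid.sock (each call is visible verbatim in the trace that follows) amounts to the condensed sketch below: four malloc bdevs are wrapped in passthru bdevs pt1..pt4 and combined into a raid1 volume with an on-disk superblock (-s), then the resulting raid bdev is read back. The rpc.py path, sizes and UUID pattern are taken from the trace itself; the loop is only a compact restatement of those commands, not the test script.

  # Condensed sketch of the traced setup sequence (arguments as they appear in this log).
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $RPC bdev_malloc_create 32 512 -b malloc$i
      $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  $RPC bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'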
00:34:02.666 [2024-05-15 11:27:21.272237] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72297 ] 00:34:02.924 [2024-05-15 11:27:21.436955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.183 [2024-05-15 11:27:21.721372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.441 [2024-05-15 11:27:21.967844] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:03.699 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:34:03.957 malloc1 00:34:03.957 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:04.215 [2024-05-15 11:27:22.649671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:04.215 [2024-05-15 11:27:22.649780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.215 [2024-05-15 11:27:22.650088] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:34:04.215 [2024-05-15 11:27:22.650147] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.215 pt1 00:34:04.215 [2024-05-15 11:27:22.651872] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.215 [2024-05-15 11:27:22.651912] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:04.215 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:04.215 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:04.215 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:04.215 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:04.215 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:04.215 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:34:04.216 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:04.216 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:04.216 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:34:04.474 malloc2 00:34:04.474 11:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:04.474 [2024-05-15 11:27:23.107575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:04.474 [2024-05-15 11:27:23.107684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.474 [2024-05-15 11:27:23.107739] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:34:04.474 [2024-05-15 11:27:23.107781] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.474 [2024-05-15 11:27:23.109654] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.474 [2024-05-15 11:27:23.109710] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:04.731 pt2 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:34:04.731 malloc3 00:34:04.731 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:04.989 [2024-05-15 11:27:23.578093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:04.989 [2024-05-15 11:27:23.578200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.989 [2024-05-15 11:27:23.578258] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:34:04.989 [2024-05-15 11:27:23.578305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.989 [2024-05-15 11:27:23.581408] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.989 [2024-05-15 11:27:23.581468] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:04.989 pt3 00:34:04.989 11:27:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:04.989 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:04.989 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:34:04.989 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:34:04.989 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:34:04.989 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:04.989 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:04.989 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:04.989 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:34:05.247 malloc4 00:34:05.247 11:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:05.506 [2024-05-15 11:27:24.028470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:05.506 [2024-05-15 11:27:24.028569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:05.506 [2024-05-15 11:27:24.028624] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:34:05.506 [2024-05-15 11:27:24.028675] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:05.506 pt4 00:34:05.506 [2024-05-15 11:27:24.030349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:05.506 [2024-05-15 11:27:24.030398] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:05.506 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:05.506 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:05.506 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:34:05.769 [2024-05-15 11:27:24.272582] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:05.769 [2024-05-15 11:27:24.274011] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:05.769 [2024-05-15 11:27:24.274065] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:05.769 [2024-05-15 11:27:24.274103] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:05.769 [2024-05-15 11:27:24.274241] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:34:05.769 [2024-05-15 11:27:24.274256] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:05.769 [2024-05-15 11:27:24.274407] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:34:05.769 [2024-05-15 11:27:24.274674] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:34:05.769 [2024-05-15 11:27:24.274691] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:34:05.769 [2024-05-15 11:27:24.274951] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:05.769 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:34:05.769 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:05.769 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:05.770 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:05.770 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:05.770 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:05.770 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:05.770 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:05.770 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:05.770 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:05.770 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.770 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.029 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:06.029 "name": "raid_bdev1", 00:34:06.029 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:06.029 "strip_size_kb": 0, 00:34:06.029 "state": "online", 00:34:06.029 "raid_level": "raid1", 00:34:06.029 "superblock": true, 00:34:06.029 "num_base_bdevs": 4, 00:34:06.029 "num_base_bdevs_discovered": 4, 00:34:06.029 "num_base_bdevs_operational": 4, 00:34:06.029 "base_bdevs_list": [ 00:34:06.029 { 00:34:06.029 "name": "pt1", 00:34:06.029 "uuid": "31c79fa6-ab08-5528-ba28-ec7b624c241a", 00:34:06.029 "is_configured": true, 00:34:06.029 "data_offset": 2048, 00:34:06.029 "data_size": 63488 00:34:06.029 }, 00:34:06.029 { 00:34:06.029 "name": "pt2", 00:34:06.029 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:06.029 "is_configured": true, 00:34:06.029 "data_offset": 2048, 00:34:06.029 "data_size": 63488 00:34:06.029 }, 00:34:06.029 { 00:34:06.029 "name": "pt3", 00:34:06.029 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:06.029 "is_configured": true, 00:34:06.029 "data_offset": 2048, 00:34:06.029 "data_size": 63488 00:34:06.029 }, 00:34:06.029 { 00:34:06.029 "name": "pt4", 00:34:06.029 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:06.029 "is_configured": true, 00:34:06.029 "data_offset": 2048, 00:34:06.029 "data_size": 63488 00:34:06.029 } 00:34:06.029 ] 00:34:06.029 }' 00:34:06.029 11:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:06.029 11:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.627 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:06.627 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:34:06.627 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:34:06.627 11:27:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:34:06.627 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:34:06.627 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:34:06.627 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:06.627 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:34:06.918 [2024-05-15 11:27:25.376847] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:06.918 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:34:06.918 "name": "raid_bdev1", 00:34:06.918 "aliases": [ 00:34:06.918 "446bd8bf-00aa-4a1b-96ec-c440ff71b591" 00:34:06.918 ], 00:34:06.918 "product_name": "Raid Volume", 00:34:06.918 "block_size": 512, 00:34:06.918 "num_blocks": 63488, 00:34:06.918 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:06.918 "assigned_rate_limits": { 00:34:06.918 "rw_ios_per_sec": 0, 00:34:06.918 "rw_mbytes_per_sec": 0, 00:34:06.918 "r_mbytes_per_sec": 0, 00:34:06.918 "w_mbytes_per_sec": 0 00:34:06.918 }, 00:34:06.918 "claimed": false, 00:34:06.918 "zoned": false, 00:34:06.919 "supported_io_types": { 00:34:06.919 "read": true, 00:34:06.919 "write": true, 00:34:06.919 "unmap": false, 00:34:06.919 "write_zeroes": true, 00:34:06.919 "flush": false, 00:34:06.919 "reset": true, 00:34:06.919 "compare": false, 00:34:06.919 "compare_and_write": false, 00:34:06.919 "abort": false, 00:34:06.919 "nvme_admin": false, 00:34:06.919 "nvme_io": false 00:34:06.919 }, 00:34:06.919 "memory_domains": [ 00:34:06.919 { 00:34:06.919 "dma_device_id": "system", 00:34:06.919 "dma_device_type": 1 00:34:06.919 }, 00:34:06.919 { 00:34:06.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:06.919 "dma_device_type": 2 00:34:06.919 }, 00:34:06.919 { 00:34:06.919 "dma_device_id": "system", 00:34:06.919 "dma_device_type": 1 00:34:06.919 }, 00:34:06.919 { 00:34:06.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:06.919 "dma_device_type": 2 00:34:06.919 }, 00:34:06.919 { 00:34:06.919 "dma_device_id": "system", 00:34:06.919 "dma_device_type": 1 00:34:06.919 }, 00:34:06.919 { 00:34:06.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:06.919 "dma_device_type": 2 00:34:06.919 }, 00:34:06.919 { 00:34:06.919 "dma_device_id": "system", 00:34:06.919 "dma_device_type": 1 00:34:06.919 }, 00:34:06.919 { 00:34:06.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:06.919 "dma_device_type": 2 00:34:06.919 } 00:34:06.919 ], 00:34:06.919 "driver_specific": { 00:34:06.919 "raid": { 00:34:06.919 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:06.919 "strip_size_kb": 0, 00:34:06.919 "state": "online", 00:34:06.919 "raid_level": "raid1", 00:34:06.919 "superblock": true, 00:34:06.919 "num_base_bdevs": 4, 00:34:06.919 "num_base_bdevs_discovered": 4, 00:34:06.919 "num_base_bdevs_operational": 4, 00:34:06.919 "base_bdevs_list": [ 00:34:06.919 { 00:34:06.919 "name": "pt1", 00:34:06.919 "uuid": "31c79fa6-ab08-5528-ba28-ec7b624c241a", 00:34:06.919 "is_configured": true, 00:34:06.919 "data_offset": 2048, 00:34:06.919 "data_size": 63488 00:34:06.919 }, 00:34:06.919 { 00:34:06.919 "name": "pt2", 00:34:06.919 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:06.919 "is_configured": true, 00:34:06.919 "data_offset": 2048, 00:34:06.919 "data_size": 63488 00:34:06.919 }, 00:34:06.919 { 
00:34:06.919 "name": "pt3", 00:34:06.919 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:06.919 "is_configured": true, 00:34:06.919 "data_offset": 2048, 00:34:06.919 "data_size": 63488 00:34:06.919 }, 00:34:06.919 { 00:34:06.919 "name": "pt4", 00:34:06.919 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:06.919 "is_configured": true, 00:34:06.919 "data_offset": 2048, 00:34:06.919 "data_size": 63488 00:34:06.919 } 00:34:06.919 ] 00:34:06.919 } 00:34:06.919 } 00:34:06.919 }' 00:34:06.919 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:06.919 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:34:06.919 pt2 00:34:06.919 pt3 00:34:06.919 pt4' 00:34:06.919 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:06.919 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:06.919 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:07.179 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:07.179 "name": "pt1", 00:34:07.179 "aliases": [ 00:34:07.179 "31c79fa6-ab08-5528-ba28-ec7b624c241a" 00:34:07.179 ], 00:34:07.179 "product_name": "passthru", 00:34:07.179 "block_size": 512, 00:34:07.179 "num_blocks": 65536, 00:34:07.179 "uuid": "31c79fa6-ab08-5528-ba28-ec7b624c241a", 00:34:07.179 "assigned_rate_limits": { 00:34:07.179 "rw_ios_per_sec": 0, 00:34:07.179 "rw_mbytes_per_sec": 0, 00:34:07.179 "r_mbytes_per_sec": 0, 00:34:07.179 "w_mbytes_per_sec": 0 00:34:07.179 }, 00:34:07.179 "claimed": true, 00:34:07.179 "claim_type": "exclusive_write", 00:34:07.179 "zoned": false, 00:34:07.179 "supported_io_types": { 00:34:07.179 "read": true, 00:34:07.179 "write": true, 00:34:07.179 "unmap": true, 00:34:07.179 "write_zeroes": true, 00:34:07.179 "flush": true, 00:34:07.179 "reset": true, 00:34:07.179 "compare": false, 00:34:07.179 "compare_and_write": false, 00:34:07.179 "abort": true, 00:34:07.179 "nvme_admin": false, 00:34:07.179 "nvme_io": false 00:34:07.179 }, 00:34:07.179 "memory_domains": [ 00:34:07.179 { 00:34:07.179 "dma_device_id": "system", 00:34:07.179 "dma_device_type": 1 00:34:07.179 }, 00:34:07.179 { 00:34:07.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:07.179 "dma_device_type": 2 00:34:07.179 } 00:34:07.179 ], 00:34:07.179 "driver_specific": { 00:34:07.179 "passthru": { 00:34:07.179 "name": "pt1", 00:34:07.179 "base_bdev_name": "malloc1" 00:34:07.179 } 00:34:07.179 } 00:34:07.179 }' 00:34:07.179 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:07.179 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:07.179 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:34:07.179 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:07.438 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:07.438 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:07.438 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:07.438 11:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:07.438 11:27:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:07.438 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:07.696 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:07.696 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:07.696 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:07.696 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:07.696 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:07.956 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:07.956 "name": "pt2", 00:34:07.956 "aliases": [ 00:34:07.956 "822e9695-3c0a-5608-b4e3-01ecbadbbc5c" 00:34:07.956 ], 00:34:07.956 "product_name": "passthru", 00:34:07.956 "block_size": 512, 00:34:07.956 "num_blocks": 65536, 00:34:07.956 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:07.956 "assigned_rate_limits": { 00:34:07.956 "rw_ios_per_sec": 0, 00:34:07.956 "rw_mbytes_per_sec": 0, 00:34:07.956 "r_mbytes_per_sec": 0, 00:34:07.956 "w_mbytes_per_sec": 0 00:34:07.956 }, 00:34:07.956 "claimed": true, 00:34:07.956 "claim_type": "exclusive_write", 00:34:07.956 "zoned": false, 00:34:07.956 "supported_io_types": { 00:34:07.956 "read": true, 00:34:07.956 "write": true, 00:34:07.956 "unmap": true, 00:34:07.956 "write_zeroes": true, 00:34:07.956 "flush": true, 00:34:07.956 "reset": true, 00:34:07.956 "compare": false, 00:34:07.956 "compare_and_write": false, 00:34:07.956 "abort": true, 00:34:07.956 "nvme_admin": false, 00:34:07.956 "nvme_io": false 00:34:07.956 }, 00:34:07.956 "memory_domains": [ 00:34:07.956 { 00:34:07.956 "dma_device_id": "system", 00:34:07.956 "dma_device_type": 1 00:34:07.956 }, 00:34:07.956 { 00:34:07.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:07.956 "dma_device_type": 2 00:34:07.956 } 00:34:07.956 ], 00:34:07.956 "driver_specific": { 00:34:07.956 "passthru": { 00:34:07.956 "name": "pt2", 00:34:07.956 "base_bdev_name": "malloc2" 00:34:07.956 } 00:34:07.956 } 00:34:07.956 }' 00:34:07.956 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:07.956 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:07.956 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:34:07.956 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:07.956 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:08.214 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:08.214 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:08.214 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:08.214 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:08.214 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:08.214 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:08.214 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:08.214 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- 
# for name in $base_bdev_names 00:34:08.473 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:34:08.473 11:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:08.473 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:08.473 "name": "pt3", 00:34:08.473 "aliases": [ 00:34:08.473 "3e8d39d9-a118-523c-ba4d-bcb835183a91" 00:34:08.473 ], 00:34:08.473 "product_name": "passthru", 00:34:08.473 "block_size": 512, 00:34:08.473 "num_blocks": 65536, 00:34:08.473 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:08.473 "assigned_rate_limits": { 00:34:08.473 "rw_ios_per_sec": 0, 00:34:08.473 "rw_mbytes_per_sec": 0, 00:34:08.473 "r_mbytes_per_sec": 0, 00:34:08.473 "w_mbytes_per_sec": 0 00:34:08.473 }, 00:34:08.473 "claimed": true, 00:34:08.473 "claim_type": "exclusive_write", 00:34:08.473 "zoned": false, 00:34:08.473 "supported_io_types": { 00:34:08.473 "read": true, 00:34:08.473 "write": true, 00:34:08.473 "unmap": true, 00:34:08.473 "write_zeroes": true, 00:34:08.473 "flush": true, 00:34:08.473 "reset": true, 00:34:08.473 "compare": false, 00:34:08.473 "compare_and_write": false, 00:34:08.473 "abort": true, 00:34:08.473 "nvme_admin": false, 00:34:08.473 "nvme_io": false 00:34:08.473 }, 00:34:08.473 "memory_domains": [ 00:34:08.473 { 00:34:08.473 "dma_device_id": "system", 00:34:08.473 "dma_device_type": 1 00:34:08.473 }, 00:34:08.473 { 00:34:08.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:08.473 "dma_device_type": 2 00:34:08.473 } 00:34:08.473 ], 00:34:08.473 "driver_specific": { 00:34:08.473 "passthru": { 00:34:08.473 "name": "pt3", 00:34:08.473 "base_bdev_name": "malloc3" 00:34:08.473 } 00:34:08.473 } 00:34:08.473 }' 00:34:08.473 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:08.473 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:08.731 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:34:08.731 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:08.731 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:08.731 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:08.731 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:08.731 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:08.989 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:08.989 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:08.989 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:08.989 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:08.989 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:08.989 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:34:08.989 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:09.247 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:09.247 "name": "pt4", 00:34:09.247 "aliases": [ 
00:34:09.247 "c8543650-cd26-55ba-a1b1-1a030e344398" 00:34:09.247 ], 00:34:09.247 "product_name": "passthru", 00:34:09.247 "block_size": 512, 00:34:09.247 "num_blocks": 65536, 00:34:09.247 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:09.247 "assigned_rate_limits": { 00:34:09.247 "rw_ios_per_sec": 0, 00:34:09.247 "rw_mbytes_per_sec": 0, 00:34:09.247 "r_mbytes_per_sec": 0, 00:34:09.247 "w_mbytes_per_sec": 0 00:34:09.247 }, 00:34:09.247 "claimed": true, 00:34:09.247 "claim_type": "exclusive_write", 00:34:09.247 "zoned": false, 00:34:09.247 "supported_io_types": { 00:34:09.247 "read": true, 00:34:09.247 "write": true, 00:34:09.247 "unmap": true, 00:34:09.247 "write_zeroes": true, 00:34:09.247 "flush": true, 00:34:09.247 "reset": true, 00:34:09.247 "compare": false, 00:34:09.247 "compare_and_write": false, 00:34:09.247 "abort": true, 00:34:09.247 "nvme_admin": false, 00:34:09.247 "nvme_io": false 00:34:09.247 }, 00:34:09.247 "memory_domains": [ 00:34:09.247 { 00:34:09.247 "dma_device_id": "system", 00:34:09.247 "dma_device_type": 1 00:34:09.247 }, 00:34:09.247 { 00:34:09.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:09.247 "dma_device_type": 2 00:34:09.247 } 00:34:09.247 ], 00:34:09.247 "driver_specific": { 00:34:09.247 "passthru": { 00:34:09.247 "name": "pt4", 00:34:09.247 "base_bdev_name": "malloc4" 00:34:09.247 } 00:34:09.247 } 00:34:09.247 }' 00:34:09.247 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:09.247 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:09.247 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:34:09.247 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:09.505 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:09.505 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:09.505 11:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:09.505 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:09.505 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:09.505 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:09.763 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:09.763 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:09.763 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:09.763 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:10.019 [2024-05-15 11:27:28.402220] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:10.019 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=446bd8bf-00aa-4a1b-96ec-c440ff71b591 00:34:10.019 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 446bd8bf-00aa-4a1b-96ec-c440ff71b591 ']' 00:34:10.019 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:10.019 [2024-05-15 11:27:28.646043] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:10.019 
[2024-05-15 11:27:28.646089] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:10.019 [2024-05-15 11:27:28.646169] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:10.019 [2024-05-15 11:27:28.646249] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:10.019 [2024-05-15 11:27:28.646261] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:34:10.276 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:10.276 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.276 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:34:10.276 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:10.276 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:10.276 11:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:10.534 11:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:10.534 11:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:10.792 11:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:10.792 11:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:11.052 11:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:11.052 11:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:11.310 11:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:11.310 11:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:11.568 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:11.826 [2024-05-15 11:27:30.242250] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:11.826 [2024-05-15 11:27:30.244016] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:11.826 [2024-05-15 11:27:30.244066] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:11.826 [2024-05-15 11:27:30.244110] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:34:11.826 [2024-05-15 11:27:30.244148] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:11.826 [2024-05-15 11:27:30.244244] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:11.826 [2024-05-15 11:27:30.244280] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:34:11.826 [2024-05-15 11:27:30.244331] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:34:11.826 [2024-05-15 11:27:30.244357] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:11.826 [2024-05-15 11:27:30.244368] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:34:11.826 request: 00:34:11.826 { 00:34:11.826 "name": "raid_bdev1", 00:34:11.826 "raid_level": "raid1", 00:34:11.826 "base_bdevs": [ 00:34:11.826 "malloc1", 00:34:11.826 "malloc2", 00:34:11.826 "malloc3", 00:34:11.826 "malloc4" 00:34:11.826 ], 00:34:11.826 "superblock": false, 00:34:11.826 "method": "bdev_raid_create", 00:34:11.826 "req_id": 1 00:34:11.826 } 00:34:11.826 Got JSON-RPC error response 00:34:11.826 response: 00:34:11.826 { 00:34:11.826 "code": -17, 00:34:11.826 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:11.826 } 00:34:11.826 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:34:11.826 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:11.826 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:11.826 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:11.826 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.826 
11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:12.084 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:12.084 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:12.084 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:12.342 [2024-05-15 11:27:30.738240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:12.342 [2024-05-15 11:27:30.738354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:12.342 [2024-05-15 11:27:30.738406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f780 00:34:12.342 [2024-05-15 11:27:30.738464] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:12.342 [2024-05-15 11:27:30.740485] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:12.342 [2024-05-15 11:27:30.740545] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:12.342 [2024-05-15 11:27:30.740643] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:34:12.342 [2024-05-15 11:27:30.740713] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:12.342 pt1 00:34:12.342 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:34:12.342 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:12.342 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:12.342 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:12.342 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:12.342 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:12.342 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:12.343 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:12.343 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:12.343 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:12.343 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:12.343 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:12.343 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:12.343 "name": "raid_bdev1", 00:34:12.343 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:12.343 "strip_size_kb": 0, 00:34:12.343 "state": "configuring", 00:34:12.343 "raid_level": "raid1", 00:34:12.343 "superblock": true, 00:34:12.343 "num_base_bdevs": 4, 00:34:12.343 "num_base_bdevs_discovered": 1, 00:34:12.343 "num_base_bdevs_operational": 4, 00:34:12.343 "base_bdevs_list": [ 00:34:12.343 { 00:34:12.343 "name": "pt1", 00:34:12.343 "uuid": "31c79fa6-ab08-5528-ba28-ec7b624c241a", 00:34:12.343 "is_configured": true, 
00:34:12.343 "data_offset": 2048, 00:34:12.343 "data_size": 63488 00:34:12.343 }, 00:34:12.343 { 00:34:12.343 "name": null, 00:34:12.343 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:12.343 "is_configured": false, 00:34:12.343 "data_offset": 2048, 00:34:12.343 "data_size": 63488 00:34:12.343 }, 00:34:12.343 { 00:34:12.343 "name": null, 00:34:12.343 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:12.343 "is_configured": false, 00:34:12.343 "data_offset": 2048, 00:34:12.343 "data_size": 63488 00:34:12.343 }, 00:34:12.343 { 00:34:12.343 "name": null, 00:34:12.343 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:12.343 "is_configured": false, 00:34:12.343 "data_offset": 2048, 00:34:12.343 "data_size": 63488 00:34:12.343 } 00:34:12.343 ] 00:34:12.343 }' 00:34:12.343 11:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:12.343 11:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.275 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:34:13.275 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:13.532 [2024-05-15 11:27:31.914409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:13.532 [2024-05-15 11:27:31.914516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:13.532 [2024-05-15 11:27:31.914568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:34:13.532 [2024-05-15 11:27:31.914592] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:13.532 [2024-05-15 11:27:31.915151] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:13.532 [2024-05-15 11:27:31.915208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:13.532 [2024-05-15 11:27:31.915304] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:13.532 [2024-05-15 11:27:31.915334] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:13.532 pt2 00:34:13.532 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:13.532 [2024-05-15 11:27:32.130476] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:13.532 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.790 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:13.790 "name": "raid_bdev1", 00:34:13.790 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:13.790 "strip_size_kb": 0, 00:34:13.790 "state": "configuring", 00:34:13.790 "raid_level": "raid1", 00:34:13.790 "superblock": true, 00:34:13.790 "num_base_bdevs": 4, 00:34:13.790 "num_base_bdevs_discovered": 1, 00:34:13.790 "num_base_bdevs_operational": 4, 00:34:13.790 "base_bdevs_list": [ 00:34:13.790 { 00:34:13.790 "name": "pt1", 00:34:13.790 "uuid": "31c79fa6-ab08-5528-ba28-ec7b624c241a", 00:34:13.790 "is_configured": true, 00:34:13.790 "data_offset": 2048, 00:34:13.790 "data_size": 63488 00:34:13.790 }, 00:34:13.790 { 00:34:13.790 "name": null, 00:34:13.790 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:13.790 "is_configured": false, 00:34:13.790 "data_offset": 2048, 00:34:13.790 "data_size": 63488 00:34:13.790 }, 00:34:13.790 { 00:34:13.790 "name": null, 00:34:13.790 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:13.790 "is_configured": false, 00:34:13.790 "data_offset": 2048, 00:34:13.790 "data_size": 63488 00:34:13.790 }, 00:34:13.790 { 00:34:13.790 "name": null, 00:34:13.790 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:13.790 "is_configured": false, 00:34:13.790 "data_offset": 2048, 00:34:13.790 "data_size": 63488 00:34:13.790 } 00:34:13.790 ] 00:34:13.790 }' 00:34:13.790 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:13.790 11:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.720 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:14.720 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:14.720 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:14.720 [2024-05-15 11:27:33.170563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:14.720 [2024-05-15 11:27:33.170654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:14.720 [2024-05-15 11:27:33.170704] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032780 00:34:14.720 [2024-05-15 11:27:33.170734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:14.720 [2024-05-15 11:27:33.171343] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:14.720 [2024-05-15 11:27:33.171400] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:14.720 [2024-05-15 11:27:33.171489] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:14.720 [2024-05-15 11:27:33.171517] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:14.720 pt2 00:34:14.720 11:27:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:14.720 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:14.720 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:14.977 [2024-05-15 11:27:33.406584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:14.977 [2024-05-15 11:27:33.406672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:14.977 [2024-05-15 11:27:33.406721] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033c80 00:34:14.977 [2024-05-15 11:27:33.406751] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:14.977 pt3 00:34:14.977 [2024-05-15 11:27:33.407683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:14.977 [2024-05-15 11:27:33.407768] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:14.977 [2024-05-15 11:27:33.407897] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:34:14.977 [2024-05-15 11:27:33.407932] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:14.977 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:14.977 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:14.977 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:15.236 [2024-05-15 11:27:33.618668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:15.236 [2024-05-15 11:27:33.618818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:15.236 [2024-05-15 11:27:33.619102] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035180 00:34:15.236 [2024-05-15 11:27:33.619146] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:15.236 [2024-05-15 11:27:33.619524] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:15.236 [2024-05-15 11:27:33.619589] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:15.236 [2024-05-15 11:27:33.619716] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:34:15.236 [2024-05-15 11:27:33.619746] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:15.236 [2024-05-15 11:27:33.619887] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:34:15.236 [2024-05-15 11:27:33.619906] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:15.236 [2024-05-15 11:27:33.619992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:15.236 [2024-05-15 11:27:33.620248] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:34:15.236 [2024-05-15 11:27:33.620265] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:34:15.236 [2024-05-15 11:27:33.620362] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:34:15.236 pt4 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:15.236 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:15.495 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:15.495 "name": "raid_bdev1", 00:34:15.495 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:15.495 "strip_size_kb": 0, 00:34:15.495 "state": "online", 00:34:15.495 "raid_level": "raid1", 00:34:15.495 "superblock": true, 00:34:15.495 "num_base_bdevs": 4, 00:34:15.495 "num_base_bdevs_discovered": 4, 00:34:15.495 "num_base_bdevs_operational": 4, 00:34:15.495 "base_bdevs_list": [ 00:34:15.495 { 00:34:15.495 "name": "pt1", 00:34:15.495 "uuid": "31c79fa6-ab08-5528-ba28-ec7b624c241a", 00:34:15.495 "is_configured": true, 00:34:15.495 "data_offset": 2048, 00:34:15.495 "data_size": 63488 00:34:15.495 }, 00:34:15.495 { 00:34:15.495 "name": "pt2", 00:34:15.495 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:15.495 "is_configured": true, 00:34:15.495 "data_offset": 2048, 00:34:15.495 "data_size": 63488 00:34:15.495 }, 00:34:15.495 { 00:34:15.495 "name": "pt3", 00:34:15.495 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:15.495 "is_configured": true, 00:34:15.495 "data_offset": 2048, 00:34:15.495 "data_size": 63488 00:34:15.495 }, 00:34:15.495 { 00:34:15.495 "name": "pt4", 00:34:15.495 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:15.495 "is_configured": true, 00:34:15.495 "data_offset": 2048, 00:34:15.495 "data_size": 63488 00:34:15.495 } 00:34:15.495 ] 00:34:15.495 }' 00:34:15.495 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:15.495 11:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.062 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:16.062 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:34:16.062 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local 
raid_bdev_info 00:34:16.062 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:34:16.062 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:34:16.062 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:34:16.062 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:34:16.062 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:16.321 [2024-05-15 11:27:34.790995] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:16.321 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:34:16.321 "name": "raid_bdev1", 00:34:16.321 "aliases": [ 00:34:16.321 "446bd8bf-00aa-4a1b-96ec-c440ff71b591" 00:34:16.321 ], 00:34:16.321 "product_name": "Raid Volume", 00:34:16.321 "block_size": 512, 00:34:16.321 "num_blocks": 63488, 00:34:16.321 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:16.321 "assigned_rate_limits": { 00:34:16.321 "rw_ios_per_sec": 0, 00:34:16.321 "rw_mbytes_per_sec": 0, 00:34:16.321 "r_mbytes_per_sec": 0, 00:34:16.321 "w_mbytes_per_sec": 0 00:34:16.321 }, 00:34:16.321 "claimed": false, 00:34:16.321 "zoned": false, 00:34:16.321 "supported_io_types": { 00:34:16.321 "read": true, 00:34:16.321 "write": true, 00:34:16.321 "unmap": false, 00:34:16.321 "write_zeroes": true, 00:34:16.321 "flush": false, 00:34:16.321 "reset": true, 00:34:16.321 "compare": false, 00:34:16.321 "compare_and_write": false, 00:34:16.321 "abort": false, 00:34:16.321 "nvme_admin": false, 00:34:16.321 "nvme_io": false 00:34:16.321 }, 00:34:16.321 "memory_domains": [ 00:34:16.321 { 00:34:16.321 "dma_device_id": "system", 00:34:16.321 "dma_device_type": 1 00:34:16.321 }, 00:34:16.321 { 00:34:16.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:16.321 "dma_device_type": 2 00:34:16.321 }, 00:34:16.321 { 00:34:16.321 "dma_device_id": "system", 00:34:16.321 "dma_device_type": 1 00:34:16.321 }, 00:34:16.321 { 00:34:16.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:16.321 "dma_device_type": 2 00:34:16.321 }, 00:34:16.321 { 00:34:16.321 "dma_device_id": "system", 00:34:16.321 "dma_device_type": 1 00:34:16.321 }, 00:34:16.321 { 00:34:16.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:16.321 "dma_device_type": 2 00:34:16.321 }, 00:34:16.321 { 00:34:16.321 "dma_device_id": "system", 00:34:16.321 "dma_device_type": 1 00:34:16.321 }, 00:34:16.321 { 00:34:16.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:16.321 "dma_device_type": 2 00:34:16.321 } 00:34:16.321 ], 00:34:16.321 "driver_specific": { 00:34:16.321 "raid": { 00:34:16.321 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:16.321 "strip_size_kb": 0, 00:34:16.321 "state": "online", 00:34:16.321 "raid_level": "raid1", 00:34:16.321 "superblock": true, 00:34:16.321 "num_base_bdevs": 4, 00:34:16.321 "num_base_bdevs_discovered": 4, 00:34:16.321 "num_base_bdevs_operational": 4, 00:34:16.321 "base_bdevs_list": [ 00:34:16.321 { 00:34:16.321 "name": "pt1", 00:34:16.322 "uuid": "31c79fa6-ab08-5528-ba28-ec7b624c241a", 00:34:16.322 "is_configured": true, 00:34:16.322 "data_offset": 2048, 00:34:16.322 "data_size": 63488 00:34:16.322 }, 00:34:16.322 { 00:34:16.322 "name": "pt2", 00:34:16.322 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:16.322 "is_configured": true, 00:34:16.322 "data_offset": 2048, 00:34:16.322 "data_size": 63488 
00:34:16.322 }, 00:34:16.322 { 00:34:16.322 "name": "pt3", 00:34:16.322 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:16.322 "is_configured": true, 00:34:16.322 "data_offset": 2048, 00:34:16.322 "data_size": 63488 00:34:16.322 }, 00:34:16.322 { 00:34:16.322 "name": "pt4", 00:34:16.322 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:16.322 "is_configured": true, 00:34:16.322 "data_offset": 2048, 00:34:16.322 "data_size": 63488 00:34:16.322 } 00:34:16.322 ] 00:34:16.322 } 00:34:16.322 } 00:34:16.322 }' 00:34:16.322 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:16.322 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:34:16.322 pt2 00:34:16.322 pt3 00:34:16.322 pt4' 00:34:16.322 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:16.322 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:16.322 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:16.580 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:16.580 "name": "pt1", 00:34:16.580 "aliases": [ 00:34:16.580 "31c79fa6-ab08-5528-ba28-ec7b624c241a" 00:34:16.580 ], 00:34:16.580 "product_name": "passthru", 00:34:16.580 "block_size": 512, 00:34:16.580 "num_blocks": 65536, 00:34:16.580 "uuid": "31c79fa6-ab08-5528-ba28-ec7b624c241a", 00:34:16.580 "assigned_rate_limits": { 00:34:16.580 "rw_ios_per_sec": 0, 00:34:16.580 "rw_mbytes_per_sec": 0, 00:34:16.580 "r_mbytes_per_sec": 0, 00:34:16.580 "w_mbytes_per_sec": 0 00:34:16.580 }, 00:34:16.580 "claimed": true, 00:34:16.580 "claim_type": "exclusive_write", 00:34:16.580 "zoned": false, 00:34:16.580 "supported_io_types": { 00:34:16.580 "read": true, 00:34:16.580 "write": true, 00:34:16.580 "unmap": true, 00:34:16.580 "write_zeroes": true, 00:34:16.580 "flush": true, 00:34:16.580 "reset": true, 00:34:16.580 "compare": false, 00:34:16.580 "compare_and_write": false, 00:34:16.580 "abort": true, 00:34:16.580 "nvme_admin": false, 00:34:16.580 "nvme_io": false 00:34:16.580 }, 00:34:16.580 "memory_domains": [ 00:34:16.580 { 00:34:16.580 "dma_device_id": "system", 00:34:16.580 "dma_device_type": 1 00:34:16.580 }, 00:34:16.580 { 00:34:16.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:16.580 "dma_device_type": 2 00:34:16.580 } 00:34:16.580 ], 00:34:16.580 "driver_specific": { 00:34:16.580 "passthru": { 00:34:16.580 "name": "pt1", 00:34:16.580 "base_bdev_name": "malloc1" 00:34:16.580 } 00:34:16.580 } 00:34:16.580 }' 00:34:16.580 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:16.581 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:16.839 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:34:16.839 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:16.839 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:16.839 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:16.839 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:16.839 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
00:34:16.839 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:16.839 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:17.097 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:17.097 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:17.097 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:17.097 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:17.097 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:17.355 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:17.355 "name": "pt2", 00:34:17.355 "aliases": [ 00:34:17.355 "822e9695-3c0a-5608-b4e3-01ecbadbbc5c" 00:34:17.355 ], 00:34:17.355 "product_name": "passthru", 00:34:17.355 "block_size": 512, 00:34:17.355 "num_blocks": 65536, 00:34:17.355 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:17.355 "assigned_rate_limits": { 00:34:17.355 "rw_ios_per_sec": 0, 00:34:17.355 "rw_mbytes_per_sec": 0, 00:34:17.355 "r_mbytes_per_sec": 0, 00:34:17.355 "w_mbytes_per_sec": 0 00:34:17.355 }, 00:34:17.355 "claimed": true, 00:34:17.355 "claim_type": "exclusive_write", 00:34:17.355 "zoned": false, 00:34:17.355 "supported_io_types": { 00:34:17.355 "read": true, 00:34:17.355 "write": true, 00:34:17.355 "unmap": true, 00:34:17.355 "write_zeroes": true, 00:34:17.355 "flush": true, 00:34:17.355 "reset": true, 00:34:17.355 "compare": false, 00:34:17.355 "compare_and_write": false, 00:34:17.355 "abort": true, 00:34:17.355 "nvme_admin": false, 00:34:17.355 "nvme_io": false 00:34:17.355 }, 00:34:17.355 "memory_domains": [ 00:34:17.355 { 00:34:17.355 "dma_device_id": "system", 00:34:17.355 "dma_device_type": 1 00:34:17.355 }, 00:34:17.355 { 00:34:17.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.355 "dma_device_type": 2 00:34:17.355 } 00:34:17.355 ], 00:34:17.355 "driver_specific": { 00:34:17.355 "passthru": { 00:34:17.356 "name": "pt2", 00:34:17.356 "base_bdev_name": "malloc2" 00:34:17.356 } 00:34:17.356 } 00:34:17.356 }' 00:34:17.356 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:17.356 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:17.356 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:34:17.356 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:17.356 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:17.356 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:17.356 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:17.614 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:17.614 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:17.614 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:17.614 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:17.614 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:17.614 11:27:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:17.614 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:17.614 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:34:17.873 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:17.873 "name": "pt3", 00:34:17.873 "aliases": [ 00:34:17.873 "3e8d39d9-a118-523c-ba4d-bcb835183a91" 00:34:17.873 ], 00:34:17.873 "product_name": "passthru", 00:34:17.873 "block_size": 512, 00:34:17.873 "num_blocks": 65536, 00:34:17.873 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:17.873 "assigned_rate_limits": { 00:34:17.873 "rw_ios_per_sec": 0, 00:34:17.873 "rw_mbytes_per_sec": 0, 00:34:17.873 "r_mbytes_per_sec": 0, 00:34:17.873 "w_mbytes_per_sec": 0 00:34:17.873 }, 00:34:17.873 "claimed": true, 00:34:17.873 "claim_type": "exclusive_write", 00:34:17.873 "zoned": false, 00:34:17.873 "supported_io_types": { 00:34:17.873 "read": true, 00:34:17.873 "write": true, 00:34:17.873 "unmap": true, 00:34:17.873 "write_zeroes": true, 00:34:17.873 "flush": true, 00:34:17.873 "reset": true, 00:34:17.873 "compare": false, 00:34:17.873 "compare_and_write": false, 00:34:17.873 "abort": true, 00:34:17.873 "nvme_admin": false, 00:34:17.873 "nvme_io": false 00:34:17.873 }, 00:34:17.873 "memory_domains": [ 00:34:17.873 { 00:34:17.873 "dma_device_id": "system", 00:34:17.873 "dma_device_type": 1 00:34:17.873 }, 00:34:17.873 { 00:34:17.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.873 "dma_device_type": 2 00:34:17.873 } 00:34:17.873 ], 00:34:17.873 "driver_specific": { 00:34:17.873 "passthru": { 00:34:17.873 "name": "pt3", 00:34:17.873 "base_bdev_name": "malloc3" 00:34:17.873 } 00:34:17.873 } 00:34:17.873 }' 00:34:17.873 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:18.131 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:18.131 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:34:18.131 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:18.131 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:18.131 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:18.131 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:18.131 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:18.391 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:18.391 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:18.391 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:18.391 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:18.391 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:18.391 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:34:18.391 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:18.650 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:18.650 "name": "pt4", 
00:34:18.650 "aliases": [ 00:34:18.650 "c8543650-cd26-55ba-a1b1-1a030e344398" 00:34:18.650 ], 00:34:18.650 "product_name": "passthru", 00:34:18.650 "block_size": 512, 00:34:18.650 "num_blocks": 65536, 00:34:18.650 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:18.650 "assigned_rate_limits": { 00:34:18.650 "rw_ios_per_sec": 0, 00:34:18.650 "rw_mbytes_per_sec": 0, 00:34:18.650 "r_mbytes_per_sec": 0, 00:34:18.650 "w_mbytes_per_sec": 0 00:34:18.650 }, 00:34:18.650 "claimed": true, 00:34:18.650 "claim_type": "exclusive_write", 00:34:18.650 "zoned": false, 00:34:18.650 "supported_io_types": { 00:34:18.650 "read": true, 00:34:18.650 "write": true, 00:34:18.650 "unmap": true, 00:34:18.650 "write_zeroes": true, 00:34:18.650 "flush": true, 00:34:18.650 "reset": true, 00:34:18.650 "compare": false, 00:34:18.650 "compare_and_write": false, 00:34:18.650 "abort": true, 00:34:18.650 "nvme_admin": false, 00:34:18.650 "nvme_io": false 00:34:18.650 }, 00:34:18.650 "memory_domains": [ 00:34:18.650 { 00:34:18.650 "dma_device_id": "system", 00:34:18.650 "dma_device_type": 1 00:34:18.650 }, 00:34:18.650 { 00:34:18.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:18.650 "dma_device_type": 2 00:34:18.650 } 00:34:18.650 ], 00:34:18.650 "driver_specific": { 00:34:18.650 "passthru": { 00:34:18.650 "name": "pt4", 00:34:18.650 "base_bdev_name": "malloc4" 00:34:18.650 } 00:34:18.650 } 00:34:18.650 }' 00:34:18.650 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:18.650 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:18.650 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:34:18.650 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:18.908 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:18.908 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:18.908 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:18.908 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:18.908 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:18.908 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:18.909 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:19.168 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:19.168 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:19.168 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:19.168 [2024-05-15 11:27:37.795378] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:19.427 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 446bd8bf-00aa-4a1b-96ec-c440ff71b591 '!=' 446bd8bf-00aa-4a1b-96ec-c440ff71b591 ']' 00:34:19.427 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:34:19.427 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:34:19.427 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:34:19.427 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:19.427 [2024-05-15 11:27:38.039313] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.427 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.686 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:19.686 "name": "raid_bdev1", 00:34:19.686 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:19.686 "strip_size_kb": 0, 00:34:19.686 "state": "online", 00:34:19.686 "raid_level": "raid1", 00:34:19.686 "superblock": true, 00:34:19.686 "num_base_bdevs": 4, 00:34:19.686 "num_base_bdevs_discovered": 3, 00:34:19.686 "num_base_bdevs_operational": 3, 00:34:19.686 "base_bdevs_list": [ 00:34:19.686 { 00:34:19.686 "name": null, 00:34:19.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.686 "is_configured": false, 00:34:19.686 "data_offset": 2048, 00:34:19.686 "data_size": 63488 00:34:19.686 }, 00:34:19.686 { 00:34:19.686 "name": "pt2", 00:34:19.686 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:19.686 "is_configured": true, 00:34:19.686 "data_offset": 2048, 00:34:19.686 "data_size": 63488 00:34:19.686 }, 00:34:19.686 { 00:34:19.686 "name": "pt3", 00:34:19.686 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:19.686 "is_configured": true, 00:34:19.686 "data_offset": 2048, 00:34:19.686 "data_size": 63488 00:34:19.686 }, 00:34:19.686 { 00:34:19.686 "name": "pt4", 00:34:19.686 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:19.686 "is_configured": true, 00:34:19.686 "data_offset": 2048, 00:34:19.686 "data_size": 63488 00:34:19.686 } 00:34:19.686 ] 00:34:19.686 }' 00:34:19.686 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:19.686 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.620 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:20.620 [2024-05-15 11:27:39.103486] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:20.620 [2024-05-15 11:27:39.103522] 
bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:20.620 [2024-05-15 11:27:39.103635] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:20.620 [2024-05-15 11:27:39.103690] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:20.620 [2024-05-15 11:27:39.103702] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:34:20.620 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:34:20.620 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.878 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:34:20.878 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:34:20.878 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:34:20.878 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:20.878 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:21.137 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:21.137 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:21.137 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:21.395 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:21.395 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:21.395 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:21.395 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:21.395 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:21.395 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:34:21.395 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:21.395 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:21.653 [2024-05-15 11:27:40.175631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:21.653 [2024-05-15 11:27:40.175727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:21.653 [2024-05-15 11:27:40.175962] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036680 00:34:21.653 [2024-05-15 11:27:40.176052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:21.653 [2024-05-15 11:27:40.178348] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:21.653 [2024-05-15 11:27:40.178439] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:21.653 [2024-05-15 11:27:40.178578] 
bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:21.653 [2024-05-15 11:27:40.178649] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:21.653 pt2 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.653 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:21.912 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:21.912 "name": "raid_bdev1", 00:34:21.912 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:21.912 "strip_size_kb": 0, 00:34:21.912 "state": "configuring", 00:34:21.912 "raid_level": "raid1", 00:34:21.912 "superblock": true, 00:34:21.912 "num_base_bdevs": 4, 00:34:21.912 "num_base_bdevs_discovered": 1, 00:34:21.912 "num_base_bdevs_operational": 3, 00:34:21.912 "base_bdevs_list": [ 00:34:21.912 { 00:34:21.912 "name": null, 00:34:21.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.912 "is_configured": false, 00:34:21.912 "data_offset": 2048, 00:34:21.912 "data_size": 63488 00:34:21.912 }, 00:34:21.912 { 00:34:21.912 "name": "pt2", 00:34:21.912 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:21.912 "is_configured": true, 00:34:21.912 "data_offset": 2048, 00:34:21.912 "data_size": 63488 00:34:21.912 }, 00:34:21.912 { 00:34:21.912 "name": null, 00:34:21.912 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:21.912 "is_configured": false, 00:34:21.912 "data_offset": 2048, 00:34:21.912 "data_size": 63488 00:34:21.912 }, 00:34:21.912 { 00:34:21.912 "name": null, 00:34:21.912 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:21.912 "is_configured": false, 00:34:21.912 "data_offset": 2048, 00:34:21.912 "data_size": 63488 00:34:21.912 } 00:34:21.912 ] 00:34:21.912 }' 00:34:21.912 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:21.912 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.478 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:34:22.478 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:22.478 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:22.736 [2024-05-15 11:27:41.335859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:22.736 [2024-05-15 11:27:41.335983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:22.736 [2024-05-15 11:27:41.336038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000037e80 00:34:22.736 [2024-05-15 11:27:41.336069] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.736 [2024-05-15 11:27:41.336460] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.736 [2024-05-15 11:27:41.336502] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:22.736 [2024-05-15 11:27:41.336590] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:34:22.736 [2024-05-15 11:27:41.336616] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:22.736 pt3 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.736 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.995 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:22.995 "name": "raid_bdev1", 00:34:22.995 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:22.995 "strip_size_kb": 0, 00:34:22.995 "state": "configuring", 00:34:22.995 "raid_level": "raid1", 00:34:22.995 "superblock": true, 00:34:22.995 "num_base_bdevs": 4, 00:34:22.995 "num_base_bdevs_discovered": 2, 00:34:22.995 "num_base_bdevs_operational": 3, 00:34:22.995 "base_bdevs_list": [ 00:34:22.995 { 00:34:22.995 "name": null, 00:34:22.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.995 "is_configured": false, 00:34:22.995 "data_offset": 2048, 00:34:22.995 "data_size": 63488 00:34:22.995 }, 00:34:22.995 { 00:34:22.995 "name": "pt2", 00:34:22.995 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:22.995 "is_configured": true, 00:34:22.995 "data_offset": 2048, 00:34:22.995 "data_size": 63488 00:34:22.995 }, 00:34:22.995 { 00:34:22.995 "name": "pt3", 00:34:22.995 
"uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:22.995 "is_configured": true, 00:34:22.995 "data_offset": 2048, 00:34:22.995 "data_size": 63488 00:34:22.995 }, 00:34:22.995 { 00:34:22.995 "name": null, 00:34:22.995 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:22.995 "is_configured": false, 00:34:22.995 "data_offset": 2048, 00:34:22.995 "data_size": 63488 00:34:22.995 } 00:34:22.995 ] 00:34:22.995 }' 00:34:22.995 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:22.995 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:23.929 [2024-05-15 11:27:42.492057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:23.929 [2024-05-15 11:27:42.492226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:23.929 [2024-05-15 11:27:42.492308] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000039380 00:34:23.929 [2024-05-15 11:27:42.492331] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:23.929 [2024-05-15 11:27:42.492737] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:23.929 [2024-05-15 11:27:42.492779] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:23.929 [2024-05-15 11:27:42.492884] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:34:23.929 [2024-05-15 11:27:42.492940] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:23.929 [2024-05-15 11:27:42.493070] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:34:23.929 [2024-05-15 11:27:42.493084] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:23.929 [2024-05-15 11:27:42.493157] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:23.929 [2024-05-15 11:27:42.493427] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:34:23.929 [2024-05-15 11:27:42.493444] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:34:23.929 [2024-05-15 11:27:42.493571] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:23.929 pt4 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.929 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.272 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:24.272 "name": "raid_bdev1", 00:34:24.272 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:24.272 "strip_size_kb": 0, 00:34:24.272 "state": "online", 00:34:24.272 "raid_level": "raid1", 00:34:24.272 "superblock": true, 00:34:24.272 "num_base_bdevs": 4, 00:34:24.272 "num_base_bdevs_discovered": 3, 00:34:24.272 "num_base_bdevs_operational": 3, 00:34:24.272 "base_bdevs_list": [ 00:34:24.272 { 00:34:24.272 "name": null, 00:34:24.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:24.272 "is_configured": false, 00:34:24.272 "data_offset": 2048, 00:34:24.272 "data_size": 63488 00:34:24.272 }, 00:34:24.272 { 00:34:24.272 "name": "pt2", 00:34:24.272 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:24.272 "is_configured": true, 00:34:24.272 "data_offset": 2048, 00:34:24.272 "data_size": 63488 00:34:24.272 }, 00:34:24.272 { 00:34:24.272 "name": "pt3", 00:34:24.272 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:24.272 "is_configured": true, 00:34:24.272 "data_offset": 2048, 00:34:24.272 "data_size": 63488 00:34:24.272 }, 00:34:24.272 { 00:34:24.272 "name": "pt4", 00:34:24.272 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:24.272 "is_configured": true, 00:34:24.272 "data_offset": 2048, 00:34:24.272 "data_size": 63488 00:34:24.272 } 00:34:24.272 ] 00:34:24.272 }' 00:34:24.272 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:24.272 11:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.837 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 4 -gt 2 ']' 00:34:24.837 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:25.095 [2024-05-15 11:27:43.596423] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:25.095 [2024-05-15 11:27:43.596477] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:25.095 [2024-05-15 11:27:43.596561] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:25.095 [2024-05-15 11:27:43.596632] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:25.095 [2024-05-15 11:27:43.596651] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:34:25.095 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.095 11:27:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@528 -- # jq -r '.[]' 00:34:25.353 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # raid_bdev= 00:34:25.353 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@529 -- # '[' -n '' ']' 00:34:25.353 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:25.611 [2024-05-15 11:27:44.016484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:25.611 [2024-05-15 11:27:44.016581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:25.611 [2024-05-15 11:27:44.016634] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003a880 00:34:25.611 [2024-05-15 11:27:44.016663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:25.611 [2024-05-15 11:27:44.018763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:25.611 [2024-05-15 11:27:44.018859] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:25.611 [2024-05-15 11:27:44.018978] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:34:25.611 [2024-05-15 11:27:44.019077] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:25.611 pt1 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.611 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.870 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:25.870 "name": "raid_bdev1", 00:34:25.870 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:25.870 "strip_size_kb": 0, 00:34:25.870 "state": "configuring", 00:34:25.870 "raid_level": "raid1", 00:34:25.870 "superblock": true, 00:34:25.870 "num_base_bdevs": 4, 00:34:25.870 "num_base_bdevs_discovered": 1, 00:34:25.870 "num_base_bdevs_operational": 4, 00:34:25.870 "base_bdevs_list": [ 00:34:25.870 { 00:34:25.870 "name": "pt1", 00:34:25.870 "uuid": "31c79fa6-ab08-5528-ba28-ec7b624c241a", 00:34:25.870 "is_configured": true, 00:34:25.870 "data_offset": 2048, 00:34:25.870 
"data_size": 63488 00:34:25.870 }, 00:34:25.870 { 00:34:25.870 "name": null, 00:34:25.870 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:25.870 "is_configured": false, 00:34:25.870 "data_offset": 2048, 00:34:25.870 "data_size": 63488 00:34:25.870 }, 00:34:25.870 { 00:34:25.870 "name": null, 00:34:25.870 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:25.870 "is_configured": false, 00:34:25.870 "data_offset": 2048, 00:34:25.870 "data_size": 63488 00:34:25.870 }, 00:34:25.870 { 00:34:25.870 "name": null, 00:34:25.870 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:25.870 "is_configured": false, 00:34:25.870 "data_offset": 2048, 00:34:25.870 "data_size": 63488 00:34:25.870 } 00:34:25.870 ] 00:34:25.870 }' 00:34:25.870 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:25.870 11:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:26.437 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i = 1 )) 00:34:26.437 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:34:26.437 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:26.694 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:34:26.694 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:34:26.694 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:26.952 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:34:26.952 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:34:26.952 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:27.209 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:34:27.209 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:34:27.209 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # i=3 00:34:27.209 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:27.467 [2024-05-15 11:27:45.896693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:27.467 [2024-05-15 11:27:45.897037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:27.467 [2024-05-15 11:27:45.897101] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003c080 00:34:27.467 [2024-05-15 11:27:45.897133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:27.467 [2024-05-15 11:27:45.897541] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:27.467 [2024-05-15 11:27:45.897601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:27.467 [2024-05-15 11:27:45.897722] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:34:27.467 [2024-05-15 11:27:45.897743] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:27.467 [2024-05-15 11:27:45.897755] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:27.467 [2024-05-15 11:27:45.897780] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state configuring 00:34:27.467 [2024-05-15 11:27:45.897892] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:27.467 pt4 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@551 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.467 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.724 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:27.724 "name": "raid_bdev1", 00:34:27.724 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:27.724 "strip_size_kb": 0, 00:34:27.724 "state": "configuring", 00:34:27.724 "raid_level": "raid1", 00:34:27.724 "superblock": true, 00:34:27.724 "num_base_bdevs": 4, 00:34:27.724 "num_base_bdevs_discovered": 1, 00:34:27.724 "num_base_bdevs_operational": 3, 00:34:27.724 "base_bdevs_list": [ 00:34:27.724 { 00:34:27.724 "name": null, 00:34:27.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.724 "is_configured": false, 00:34:27.724 "data_offset": 2048, 00:34:27.724 "data_size": 63488 00:34:27.724 }, 00:34:27.724 { 00:34:27.724 "name": null, 00:34:27.724 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:27.724 "is_configured": false, 00:34:27.724 "data_offset": 2048, 00:34:27.724 "data_size": 63488 00:34:27.724 }, 00:34:27.724 { 00:34:27.724 "name": null, 00:34:27.724 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:27.724 "is_configured": false, 00:34:27.724 "data_offset": 2048, 00:34:27.724 "data_size": 63488 00:34:27.724 }, 00:34:27.724 { 00:34:27.724 "name": "pt4", 00:34:27.724 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:27.725 "is_configured": true, 00:34:27.725 "data_offset": 2048, 00:34:27.725 "data_size": 63488 00:34:27.725 } 00:34:27.725 ] 00:34:27.725 }' 00:34:27.725 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:27.725 11:27:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:28.289 11:27:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i = 1 )) 00:34:28.289 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:34:28.290 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:28.547 [2024-05-15 11:27:47.097178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:28.547 [2024-05-15 11:27:47.097304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:28.547 [2024-05-15 11:27:47.097363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003d580 00:34:28.547 [2024-05-15 11:27:47.097411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:28.547 [2024-05-15 11:27:47.098037] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:28.547 [2024-05-15 11:27:47.098104] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:28.548 [2024-05-15 11:27:47.098218] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:28.548 [2024-05-15 11:27:47.098255] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:28.548 pt2 00:34:28.548 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:34:28.548 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:34:28.548 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:28.805 [2024-05-15 11:27:47.325186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:28.806 [2024-05-15 11:27:47.325307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:28.806 [2024-05-15 11:27:47.325367] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003ea80 00:34:28.806 [2024-05-15 11:27:47.325411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:28.806 [2024-05-15 11:27:47.326084] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:28.806 [2024-05-15 11:27:47.326148] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:28.806 [2024-05-15 11:27:47.326252] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:34:28.806 [2024-05-15 11:27:47.326290] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:28.806 [2024-05-15 11:27:47.326399] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012300 00:34:28.806 [2024-05-15 11:27:47.326417] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:28.806 [2024-05-15 11:27:47.326545] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:34:28.806 [2024-05-15 11:27:47.326801] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012300 00:34:28.806 [2024-05-15 11:27:47.326845] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012300 00:34:28.806 [2024-05-15 11:27:47.326968] bdev_raid.c: 
315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:28.806 pt3 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@559 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.806 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:29.064 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:29.064 "name": "raid_bdev1", 00:34:29.064 "uuid": "446bd8bf-00aa-4a1b-96ec-c440ff71b591", 00:34:29.064 "strip_size_kb": 0, 00:34:29.064 "state": "online", 00:34:29.064 "raid_level": "raid1", 00:34:29.064 "superblock": true, 00:34:29.064 "num_base_bdevs": 4, 00:34:29.064 "num_base_bdevs_discovered": 3, 00:34:29.064 "num_base_bdevs_operational": 3, 00:34:29.064 "base_bdevs_list": [ 00:34:29.064 { 00:34:29.064 "name": null, 00:34:29.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.064 "is_configured": false, 00:34:29.064 "data_offset": 2048, 00:34:29.064 "data_size": 63488 00:34:29.064 }, 00:34:29.064 { 00:34:29.064 "name": "pt2", 00:34:29.064 "uuid": "822e9695-3c0a-5608-b4e3-01ecbadbbc5c", 00:34:29.064 "is_configured": true, 00:34:29.064 "data_offset": 2048, 00:34:29.064 "data_size": 63488 00:34:29.064 }, 00:34:29.064 { 00:34:29.064 "name": "pt3", 00:34:29.064 "uuid": "3e8d39d9-a118-523c-ba4d-bcb835183a91", 00:34:29.064 "is_configured": true, 00:34:29.064 "data_offset": 2048, 00:34:29.064 "data_size": 63488 00:34:29.064 }, 00:34:29.064 { 00:34:29.064 "name": "pt4", 00:34:29.064 "uuid": "c8543650-cd26-55ba-a1b1-1a030e344398", 00:34:29.064 "is_configured": true, 00:34:29.064 "data_offset": 2048, 00:34:29.064 "data_size": 63488 00:34:29.064 } 00:34:29.064 ] 00:34:29.064 }' 00:34:29.064 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:29.064 11:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # 
jq -r '.[] | .uuid' 00:34:29.999 [2024-05-15 11:27:48.529542] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' 446bd8bf-00aa-4a1b-96ec-c440ff71b591 '!=' 446bd8bf-00aa-4a1b-96ec-c440ff71b591 ']' 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 72297 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 72297 ']' 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 72297 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72297 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:29.999 killing process with pid 72297 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72297' 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 72297 00:34:29.999 11:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 72297 00:34:29.999 [2024-05-15 11:27:48.572102] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:29.999 [2024-05-15 11:27:48.572214] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:29.999 [2024-05-15 11:27:48.572282] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:29.999 [2024-05-15 11:27:48.572295] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012300 name raid_bdev1, state offline 00:34:30.565 [2024-05-15 11:27:48.915010] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:31.500 ************************************ 00:34:31.500 END TEST raid_superblock_test 00:34:31.500 ************************************ 00:34:31.500 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:34:31.500 00:34:31.500 real 0m28.967s 00:34:31.500 user 0m54.562s 00:34:31.500 sys 0m2.873s 00:34:31.500 11:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:31.500 11:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:31.500 11:27:50 bdev_raid -- bdev/bdev_raid.sh@821 -- # '[' '' = true ']' 00:34:31.500 11:27:50 bdev_raid -- bdev/bdev_raid.sh@830 -- # '[' n == y ']' 00:34:31.500 11:27:50 bdev_raid -- bdev/bdev_raid.sh@842 -- # base_blocklen=4096 00:34:31.500 11:27:50 bdev_raid -- bdev/bdev_raid.sh@844 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:34:31.500 11:27:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:34:31.500 11:27:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:31.500 11:27:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:31.759 ************************************ 00:34:31.759 START TEST raid_state_function_test_sb_4k 00:34:31.759 ************************************ 00:34:31.759 11:27:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:34:31.759 Process raid pid: 73206 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # raid_pid=73206 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 73206' 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@247 -- # waitforlisten 73206 /var/tmp/spdk-raid.sock 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 73206 ']' 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:31.759 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:31.759 11:27:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:31.759 [2024-05-15 11:27:50.302788] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:34:31.759 [2024-05-15 11:27:50.303076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:32.018 [2024-05-15 11:27:50.472466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.277 [2024-05-15 11:27:50.729517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.536 [2024-05-15 11:27:50.933753] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:32.536 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:32.536 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:34:32.536 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:32.843 [2024-05-15 11:27:51.308956] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:32.843 [2024-05-15 11:27:51.309038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:32.843 [2024-05-15 11:27:51.309069] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:32.843 [2024-05-15 11:27:51.309087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 
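For reference, the state check that verify_raid_bdev_state performs in the trace that follows can be reproduced by hand. The sketch below is illustrative only: it reuses the socket path, rpc.py script and jq filter already shown in this log, and the field names come from the raid_bdev_info JSON the test prints (state, raid_level, num_base_bdevs_discovered); it is not part of bdev_raid.sh.

    # hand-run sketch of the check done by verify_raid_bdev_state (illustrative)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    echo "$info" | jq -r .state                        # "configuring" until both base bdevs exist
    echo "$info" | jq -r .raid_level                   # "raid1"
    echo "$info" | jq -r .num_base_bdevs_discovered    # 0 at this point in the run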
00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:32.843 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.117 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:33.117 "name": "Existed_Raid", 00:34:33.117 "uuid": "6db7fd61-6add-4323-a09a-ab3ffa9b3b83", 00:34:33.117 "strip_size_kb": 0, 00:34:33.117 "state": "configuring", 00:34:33.117 "raid_level": "raid1", 00:34:33.117 "superblock": true, 00:34:33.117 "num_base_bdevs": 2, 00:34:33.117 "num_base_bdevs_discovered": 0, 00:34:33.117 "num_base_bdevs_operational": 2, 00:34:33.117 "base_bdevs_list": [ 00:34:33.117 { 00:34:33.117 "name": "BaseBdev1", 00:34:33.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:33.117 "is_configured": false, 00:34:33.117 "data_offset": 0, 00:34:33.117 "data_size": 0 00:34:33.117 }, 00:34:33.117 { 00:34:33.117 "name": "BaseBdev2", 00:34:33.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:33.117 "is_configured": false, 00:34:33.117 "data_offset": 0, 00:34:33.117 "data_size": 0 00:34:33.117 } 00:34:33.117 ] 00:34:33.117 }' 00:34:33.117 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:33.117 11:27:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:33.683 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:33.941 [2024-05-15 11:27:52.357004] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:33.941 [2024-05-15 11:27:52.357049] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:34:33.941 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:33.941 [2024-05-15 11:27:52.557013] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:33.941 [2024-05-15 11:27:52.557133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:33.941 [2024-05-15 11:27:52.557153] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:33.941 [2024-05-15 11:27:52.557186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:33.941 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:34:34.198 [2024-05-15 11:27:52.794376] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:34.198 BaseBdev1 00:34:34.198 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:34:34.198 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:34:34.198 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:34:34.198 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@897 -- # local i 00:34:34.198 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:34:34.198 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:34:34.198 11:27:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:34.456 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:34.713 [ 00:34:34.713 { 00:34:34.713 "name": "BaseBdev1", 00:34:34.713 "aliases": [ 00:34:34.713 "43400c70-bab2-45e2-9e6d-fe0edfc93fba" 00:34:34.713 ], 00:34:34.713 "product_name": "Malloc disk", 00:34:34.713 "block_size": 4096, 00:34:34.713 "num_blocks": 8192, 00:34:34.713 "uuid": "43400c70-bab2-45e2-9e6d-fe0edfc93fba", 00:34:34.713 "assigned_rate_limits": { 00:34:34.713 "rw_ios_per_sec": 0, 00:34:34.713 "rw_mbytes_per_sec": 0, 00:34:34.713 "r_mbytes_per_sec": 0, 00:34:34.713 "w_mbytes_per_sec": 0 00:34:34.713 }, 00:34:34.713 "claimed": true, 00:34:34.713 "claim_type": "exclusive_write", 00:34:34.713 "zoned": false, 00:34:34.713 "supported_io_types": { 00:34:34.713 "read": true, 00:34:34.713 "write": true, 00:34:34.713 "unmap": true, 00:34:34.713 "write_zeroes": true, 00:34:34.713 "flush": true, 00:34:34.713 "reset": true, 00:34:34.713 "compare": false, 00:34:34.713 "compare_and_write": false, 00:34:34.713 "abort": true, 00:34:34.713 "nvme_admin": false, 00:34:34.713 "nvme_io": false 00:34:34.713 }, 00:34:34.713 "memory_domains": [ 00:34:34.713 { 00:34:34.713 "dma_device_id": "system", 00:34:34.713 "dma_device_type": 1 00:34:34.713 }, 00:34:34.713 { 00:34:34.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:34.713 "dma_device_type": 2 00:34:34.713 } 00:34:34.713 ], 00:34:34.713 "driver_specific": {} 00:34:34.713 } 00:34:34.713 ] 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:34:34.713 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:34.972 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:34.972 "name": "Existed_Raid", 00:34:34.972 "uuid": "4c576986-ba6c-43d6-91de-9137e3af7ab3", 00:34:34.972 "strip_size_kb": 0, 00:34:34.972 "state": "configuring", 00:34:34.972 "raid_level": "raid1", 00:34:34.972 "superblock": true, 00:34:34.972 "num_base_bdevs": 2, 00:34:34.972 "num_base_bdevs_discovered": 1, 00:34:34.972 "num_base_bdevs_operational": 2, 00:34:34.972 "base_bdevs_list": [ 00:34:34.972 { 00:34:34.972 "name": "BaseBdev1", 00:34:34.972 "uuid": "43400c70-bab2-45e2-9e6d-fe0edfc93fba", 00:34:34.972 "is_configured": true, 00:34:34.972 "data_offset": 256, 00:34:34.972 "data_size": 7936 00:34:34.972 }, 00:34:34.972 { 00:34:34.972 "name": "BaseBdev2", 00:34:34.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:34.972 "is_configured": false, 00:34:34.972 "data_offset": 0, 00:34:34.972 "data_size": 0 00:34:34.972 } 00:34:34.972 ] 00:34:34.972 }' 00:34:34.972 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:34.972 11:27:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:35.538 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:35.797 [2024-05-15 11:27:54.342564] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:35.797 [2024-05-15 11:27:54.342643] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:34:35.797 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:36.056 [2024-05-15 11:27:54.538681] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:36.056 [2024-05-15 11:27:54.540424] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:36.056 [2024-05-15 11:27:54.540482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:36.057 11:27:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.057 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:36.315 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:36.315 "name": "Existed_Raid", 00:34:36.315 "uuid": "3a2014bc-f27e-442e-8778-5f4535e40499", 00:34:36.315 "strip_size_kb": 0, 00:34:36.315 "state": "configuring", 00:34:36.315 "raid_level": "raid1", 00:34:36.315 "superblock": true, 00:34:36.315 "num_base_bdevs": 2, 00:34:36.315 "num_base_bdevs_discovered": 1, 00:34:36.315 "num_base_bdevs_operational": 2, 00:34:36.315 "base_bdevs_list": [ 00:34:36.315 { 00:34:36.315 "name": "BaseBdev1", 00:34:36.315 "uuid": "43400c70-bab2-45e2-9e6d-fe0edfc93fba", 00:34:36.315 "is_configured": true, 00:34:36.315 "data_offset": 256, 00:34:36.315 "data_size": 7936 00:34:36.315 }, 00:34:36.315 { 00:34:36.315 "name": "BaseBdev2", 00:34:36.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.315 "is_configured": false, 00:34:36.315 "data_offset": 0, 00:34:36.315 "data_size": 0 00:34:36.315 } 00:34:36.315 ] 00:34:36.315 }' 00:34:36.315 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:36.315 11:27:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:37.289 11:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:34:37.289 [2024-05-15 11:27:55.736752] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:37.289 [2024-05-15 11:27:55.737172] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:34:37.289 [2024-05-15 11:27:55.737206] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:37.289 [2024-05-15 11:27:55.737349] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:34:37.289 [2024-05-15 11:27:55.737652] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:34:37.289 [2024-05-15 11:27:55.737672] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:34:37.289 [2024-05-15 11:27:55.737863] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:37.289 BaseBdev2 00:34:37.289 11:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:34:37.289 11:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:34:37.289 11:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:34:37.289 11:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:34:37.289 11:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ 
-z '' ]] 00:34:37.289 11:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:34:37.289 11:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:37.548 11:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:37.548 [ 00:34:37.548 { 00:34:37.548 "name": "BaseBdev2", 00:34:37.548 "aliases": [ 00:34:37.548 "15cf2f43-5889-42cf-9736-0c17c259c8e1" 00:34:37.548 ], 00:34:37.548 "product_name": "Malloc disk", 00:34:37.548 "block_size": 4096, 00:34:37.548 "num_blocks": 8192, 00:34:37.548 "uuid": "15cf2f43-5889-42cf-9736-0c17c259c8e1", 00:34:37.548 "assigned_rate_limits": { 00:34:37.548 "rw_ios_per_sec": 0, 00:34:37.548 "rw_mbytes_per_sec": 0, 00:34:37.548 "r_mbytes_per_sec": 0, 00:34:37.548 "w_mbytes_per_sec": 0 00:34:37.548 }, 00:34:37.548 "claimed": true, 00:34:37.548 "claim_type": "exclusive_write", 00:34:37.548 "zoned": false, 00:34:37.548 "supported_io_types": { 00:34:37.548 "read": true, 00:34:37.548 "write": true, 00:34:37.548 "unmap": true, 00:34:37.548 "write_zeroes": true, 00:34:37.548 "flush": true, 00:34:37.548 "reset": true, 00:34:37.548 "compare": false, 00:34:37.548 "compare_and_write": false, 00:34:37.548 "abort": true, 00:34:37.548 "nvme_admin": false, 00:34:37.548 "nvme_io": false 00:34:37.548 }, 00:34:37.548 "memory_domains": [ 00:34:37.548 { 00:34:37.548 "dma_device_id": "system", 00:34:37.548 "dma_device_type": 1 00:34:37.548 }, 00:34:37.548 { 00:34:37.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:37.548 "dma_device_type": 2 00:34:37.548 } 00:34:37.548 ], 00:34:37.548 "driver_specific": {} 00:34:37.548 } 00:34:37.548 ] 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:34:37.548 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:37.807 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:37.807 "name": "Existed_Raid", 00:34:37.807 "uuid": "3a2014bc-f27e-442e-8778-5f4535e40499", 00:34:37.807 "strip_size_kb": 0, 00:34:37.807 "state": "online", 00:34:37.807 "raid_level": "raid1", 00:34:37.807 "superblock": true, 00:34:37.807 "num_base_bdevs": 2, 00:34:37.807 "num_base_bdevs_discovered": 2, 00:34:37.807 "num_base_bdevs_operational": 2, 00:34:37.807 "base_bdevs_list": [ 00:34:37.807 { 00:34:37.807 "name": "BaseBdev1", 00:34:37.807 "uuid": "43400c70-bab2-45e2-9e6d-fe0edfc93fba", 00:34:37.807 "is_configured": true, 00:34:37.807 "data_offset": 256, 00:34:37.807 "data_size": 7936 00:34:37.807 }, 00:34:37.807 { 00:34:37.807 "name": "BaseBdev2", 00:34:37.807 "uuid": "15cf2f43-5889-42cf-9736-0c17c259c8e1", 00:34:37.807 "is_configured": true, 00:34:37.807 "data_offset": 256, 00:34:37.807 "data_size": 7936 00:34:37.807 } 00:34:37.808 ] 00:34:37.808 }' 00:34:37.808 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:37.808 11:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # local name 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:34:38.742 [2024-05-15 11:27:57.337219] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:34:38.742 "name": "Existed_Raid", 00:34:38.742 "aliases": [ 00:34:38.742 "3a2014bc-f27e-442e-8778-5f4535e40499" 00:34:38.742 ], 00:34:38.742 "product_name": "Raid Volume", 00:34:38.742 "block_size": 4096, 00:34:38.742 "num_blocks": 7936, 00:34:38.742 "uuid": "3a2014bc-f27e-442e-8778-5f4535e40499", 00:34:38.742 "assigned_rate_limits": { 00:34:38.742 "rw_ios_per_sec": 0, 00:34:38.742 "rw_mbytes_per_sec": 0, 00:34:38.742 "r_mbytes_per_sec": 0, 00:34:38.742 "w_mbytes_per_sec": 0 00:34:38.742 }, 00:34:38.742 "claimed": false, 00:34:38.742 "zoned": false, 00:34:38.742 "supported_io_types": { 00:34:38.742 "read": true, 00:34:38.742 "write": true, 00:34:38.742 "unmap": false, 00:34:38.742 "write_zeroes": true, 00:34:38.742 "flush": false, 00:34:38.742 "reset": true, 00:34:38.742 "compare": false, 00:34:38.742 "compare_and_write": false, 00:34:38.742 "abort": false, 00:34:38.742 "nvme_admin": false, 
00:34:38.742 "nvme_io": false 00:34:38.742 }, 00:34:38.742 "memory_domains": [ 00:34:38.742 { 00:34:38.742 "dma_device_id": "system", 00:34:38.742 "dma_device_type": 1 00:34:38.742 }, 00:34:38.742 { 00:34:38.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:38.742 "dma_device_type": 2 00:34:38.742 }, 00:34:38.742 { 00:34:38.742 "dma_device_id": "system", 00:34:38.742 "dma_device_type": 1 00:34:38.742 }, 00:34:38.742 { 00:34:38.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:38.742 "dma_device_type": 2 00:34:38.742 } 00:34:38.742 ], 00:34:38.742 "driver_specific": { 00:34:38.742 "raid": { 00:34:38.742 "uuid": "3a2014bc-f27e-442e-8778-5f4535e40499", 00:34:38.742 "strip_size_kb": 0, 00:34:38.742 "state": "online", 00:34:38.742 "raid_level": "raid1", 00:34:38.742 "superblock": true, 00:34:38.742 "num_base_bdevs": 2, 00:34:38.742 "num_base_bdevs_discovered": 2, 00:34:38.742 "num_base_bdevs_operational": 2, 00:34:38.742 "base_bdevs_list": [ 00:34:38.742 { 00:34:38.742 "name": "BaseBdev1", 00:34:38.742 "uuid": "43400c70-bab2-45e2-9e6d-fe0edfc93fba", 00:34:38.742 "is_configured": true, 00:34:38.742 "data_offset": 256, 00:34:38.742 "data_size": 7936 00:34:38.742 }, 00:34:38.742 { 00:34:38.742 "name": "BaseBdev2", 00:34:38.742 "uuid": "15cf2f43-5889-42cf-9736-0c17c259c8e1", 00:34:38.742 "is_configured": true, 00:34:38.742 "data_offset": 256, 00:34:38.742 "data_size": 7936 00:34:38.742 } 00:34:38.742 ] 00:34:38.742 } 00:34:38.742 } 00:34:38.742 }' 00:34:38.742 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:39.001 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:34:39.001 BaseBdev2' 00:34:39.001 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:39.001 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:39.001 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:39.260 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:39.260 "name": "BaseBdev1", 00:34:39.260 "aliases": [ 00:34:39.260 "43400c70-bab2-45e2-9e6d-fe0edfc93fba" 00:34:39.260 ], 00:34:39.260 "product_name": "Malloc disk", 00:34:39.260 "block_size": 4096, 00:34:39.260 "num_blocks": 8192, 00:34:39.260 "uuid": "43400c70-bab2-45e2-9e6d-fe0edfc93fba", 00:34:39.260 "assigned_rate_limits": { 00:34:39.260 "rw_ios_per_sec": 0, 00:34:39.260 "rw_mbytes_per_sec": 0, 00:34:39.260 "r_mbytes_per_sec": 0, 00:34:39.260 "w_mbytes_per_sec": 0 00:34:39.260 }, 00:34:39.260 "claimed": true, 00:34:39.260 "claim_type": "exclusive_write", 00:34:39.260 "zoned": false, 00:34:39.260 "supported_io_types": { 00:34:39.260 "read": true, 00:34:39.260 "write": true, 00:34:39.260 "unmap": true, 00:34:39.260 "write_zeroes": true, 00:34:39.261 "flush": true, 00:34:39.261 "reset": true, 00:34:39.261 "compare": false, 00:34:39.261 "compare_and_write": false, 00:34:39.261 "abort": true, 00:34:39.261 "nvme_admin": false, 00:34:39.261 "nvme_io": false 00:34:39.261 }, 00:34:39.261 "memory_domains": [ 00:34:39.261 { 00:34:39.261 "dma_device_id": "system", 00:34:39.261 "dma_device_type": 1 00:34:39.261 }, 00:34:39.261 { 00:34:39.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:39.261 "dma_device_type": 2 
00:34:39.261 } 00:34:39.261 ], 00:34:39.261 "driver_specific": {} 00:34:39.261 }' 00:34:39.261 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:39.261 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:39.261 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:34:39.261 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:39.261 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:39.520 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:39.520 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:39.520 11:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:39.520 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:39.520 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:39.520 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:39.779 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:39.779 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:39.779 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:39.779 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:40.037 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:40.037 "name": "BaseBdev2", 00:34:40.037 "aliases": [ 00:34:40.037 "15cf2f43-5889-42cf-9736-0c17c259c8e1" 00:34:40.037 ], 00:34:40.037 "product_name": "Malloc disk", 00:34:40.037 "block_size": 4096, 00:34:40.037 "num_blocks": 8192, 00:34:40.037 "uuid": "15cf2f43-5889-42cf-9736-0c17c259c8e1", 00:34:40.037 "assigned_rate_limits": { 00:34:40.037 "rw_ios_per_sec": 0, 00:34:40.037 "rw_mbytes_per_sec": 0, 00:34:40.037 "r_mbytes_per_sec": 0, 00:34:40.037 "w_mbytes_per_sec": 0 00:34:40.037 }, 00:34:40.037 "claimed": true, 00:34:40.037 "claim_type": "exclusive_write", 00:34:40.037 "zoned": false, 00:34:40.037 "supported_io_types": { 00:34:40.037 "read": true, 00:34:40.037 "write": true, 00:34:40.037 "unmap": true, 00:34:40.037 "write_zeroes": true, 00:34:40.037 "flush": true, 00:34:40.037 "reset": true, 00:34:40.037 "compare": false, 00:34:40.037 "compare_and_write": false, 00:34:40.037 "abort": true, 00:34:40.037 "nvme_admin": false, 00:34:40.037 "nvme_io": false 00:34:40.037 }, 00:34:40.037 "memory_domains": [ 00:34:40.037 { 00:34:40.037 "dma_device_id": "system", 00:34:40.037 "dma_device_type": 1 00:34:40.037 }, 00:34:40.037 { 00:34:40.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:40.037 "dma_device_type": 2 00:34:40.037 } 00:34:40.037 ], 00:34:40.037 "driver_specific": {} 00:34:40.037 }' 00:34:40.037 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:40.037 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:40.037 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 
4096 ]] 00:34:40.038 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:40.038 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:40.296 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:40.296 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:40.296 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:40.296 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:40.296 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:40.296 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:40.296 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:40.296 11:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:40.555 [2024-05-15 11:27:59.109490] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:40.813 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # local expected_state 00:34:40.813 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:40.814 "name": "Existed_Raid", 00:34:40.814 "uuid": 
"3a2014bc-f27e-442e-8778-5f4535e40499", 00:34:40.814 "strip_size_kb": 0, 00:34:40.814 "state": "online", 00:34:40.814 "raid_level": "raid1", 00:34:40.814 "superblock": true, 00:34:40.814 "num_base_bdevs": 2, 00:34:40.814 "num_base_bdevs_discovered": 1, 00:34:40.814 "num_base_bdevs_operational": 1, 00:34:40.814 "base_bdevs_list": [ 00:34:40.814 { 00:34:40.814 "name": null, 00:34:40.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.814 "is_configured": false, 00:34:40.814 "data_offset": 256, 00:34:40.814 "data_size": 7936 00:34:40.814 }, 00:34:40.814 { 00:34:40.814 "name": "BaseBdev2", 00:34:40.814 "uuid": "15cf2f43-5889-42cf-9736-0c17c259c8e1", 00:34:40.814 "is_configured": true, 00:34:40.814 "data_offset": 256, 00:34:40.814 "data_size": 7936 00:34:40.814 } 00:34:40.814 ] 00:34:40.814 }' 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:40.814 11:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:41.747 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:34:41.747 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:41.747 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.747 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:34:42.010 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:34:42.010 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:42.010 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:42.010 [2024-05-15 11:28:00.642923] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:42.010 [2024-05-15 11:28:00.643011] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:42.266 [2024-05-15 11:28:00.722944] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:42.266 [2024-05-15 11:28:00.723070] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:42.266 [2024-05-15 11:28:00.723085] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:34:42.266 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:42.266 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:42.266 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:42.266 11:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:34:42.523 11:28:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@342 -- # killprocess 73206 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 73206 ']' 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 73206 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73206 00:34:42.523 killing process with pid 73206 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73206' 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 73206 00:34:42.523 11:28:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 73206 00:34:42.523 [2024-05-15 11:28:01.093463] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:42.523 [2024-05-15 11:28:01.093572] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:43.897 11:28:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@344 -- # return 0 00:34:43.897 ************************************ 00:34:43.897 END TEST raid_state_function_test_sb_4k 00:34:43.897 ************************************ 00:34:43.897 00:34:43.897 real 0m12.204s 00:34:43.897 user 0m21.676s 00:34:43.897 sys 0m1.323s 00:34:43.897 11:28:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:43.897 11:28:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:43.898 11:28:02 bdev_raid -- bdev/bdev_raid.sh@845 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:34:43.898 11:28:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:34:43.898 11:28:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:43.898 11:28:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:43.898 ************************************ 00:34:43.898 START TEST raid_superblock_test_4k 00:34:43.898 ************************************ 00:34:43.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
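A minimal sketch of the killprocess teardown pattern traced above, assuming bash and the pid 73206 shown in the log; the real helper lives in common/autotest_common.sh and does more bookkeeping, so treat this as an outline only:

    # Sketch: only signal the process if it is still alive and is the SPDK reactor we started.
    pid=73206
    kill -0 "$pid" || exit 1                          # process must still exist
    proc=$(ps --no-headers -o comm= "$pid")           # the trace expects "reactor_0" here
    [ "$proc" = sudo ] && exit 1                      # never kill a sudo wrapper by mistake
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it before the next test starts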
00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=73591 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 73591 /var/tmp/spdk-raid.sock 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 73591 ']' 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:43.898 11:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:44.157 [2024-05-15 11:28:02.552332] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
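The next test brings up its own SPDK application before creating any bdevs. A minimal sketch of that startup step, assuming bash, the paths shown in the trace, and rpc_get_methods as the readiness probe (the real waitforlisten helper adds timeouts and further checks):

    # Start the bdev_svc app on a private RPC socket and wait until it answers RPCs.
    sock=/var/tmp/spdk-raid.sock
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
    raid_pid=$!                                       # 73591 in this run
    until "$spdk"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" || exit 1                 # bail out if the app died before listening
        sleep 0.1
    done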
00:34:44.157 [2024-05-15 11:28:02.552551] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73591 ] 00:34:44.157 [2024-05-15 11:28:02.726164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.415 [2024-05-15 11:28:02.992566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.674 [2024-05-15 11:28:03.203045] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:44.933 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:34:45.191 malloc1 00:34:45.192 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:45.451 [2024-05-15 11:28:03.851908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:45.451 [2024-05-15 11:28:03.852030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:45.451 [2024-05-15 11:28:03.852107] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:34:45.451 [2024-05-15 11:28:03.852209] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:45.451 [2024-05-15 11:28:03.854049] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:45.451 [2024-05-15 11:28:03.854092] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:45.451 pt1 00:34:45.451 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:45.451 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:45.451 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:45.451 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:45.451 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:45.451 11:28:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:45.451 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:45.451 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:45.451 11:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:34:45.451 malloc2 00:34:45.710 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:45.710 [2024-05-15 11:28:04.279391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:45.710 [2024-05-15 11:28:04.279529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:45.710 [2024-05-15 11:28:04.279579] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:34:45.710 [2024-05-15 11:28:04.279646] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:45.710 [2024-05-15 11:28:04.281711] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:45.710 [2024-05-15 11:28:04.281778] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:45.710 pt2 00:34:45.710 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:45.710 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:45.710 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:34:45.969 [2024-05-15 11:28:04.471525] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:45.969 [2024-05-15 11:28:04.473013] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:45.969 [2024-05-15 11:28:04.473195] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:34:45.969 [2024-05-15 11:28:04.473211] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:45.969 [2024-05-15 11:28:04.473351] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:34:45.969 [2024-05-15 11:28:04.473623] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:34:45.969 [2024-05-15 11:28:04.473638] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:34:45.969 [2024-05-15 11:28:04.473750] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
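Condensed, the RPC sequence the script has driven so far is the one below; every command and parameter is copied from the trace (two 32 MB malloc bdevs with a 4096-byte block size, a passthru wrapper over each, then a two-member raid1 bdev), and only the $RPC shorthand is introduced here for readability:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 4096 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 4096 -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s   # -s enables the raid superblock under test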
00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:45.969 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:46.226 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:46.226 "name": "raid_bdev1", 00:34:46.226 "uuid": "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0", 00:34:46.226 "strip_size_kb": 0, 00:34:46.226 "state": "online", 00:34:46.226 "raid_level": "raid1", 00:34:46.226 "superblock": true, 00:34:46.226 "num_base_bdevs": 2, 00:34:46.226 "num_base_bdevs_discovered": 2, 00:34:46.226 "num_base_bdevs_operational": 2, 00:34:46.226 "base_bdevs_list": [ 00:34:46.226 { 00:34:46.226 "name": "pt1", 00:34:46.226 "uuid": "b0b7a73e-656a-588f-a0e9-b402941cb19a", 00:34:46.226 "is_configured": true, 00:34:46.226 "data_offset": 256, 00:34:46.226 "data_size": 7936 00:34:46.226 }, 00:34:46.226 { 00:34:46.226 "name": "pt2", 00:34:46.226 "uuid": "85c7825f-029a-57e8-84d4-016ac7265fbb", 00:34:46.226 "is_configured": true, 00:34:46.226 "data_offset": 256, 00:34:46.226 "data_size": 7936 00:34:46.226 } 00:34:46.226 ] 00:34:46.226 }' 00:34:46.226 11:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:46.226 11:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:46.790 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:46.790 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:34:46.790 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:34:46.790 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:34:46.790 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:34:46.790 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:34:46.790 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:46.790 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:34:47.048 [2024-05-15 11:28:05.491909] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:47.048 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:34:47.048 "name": "raid_bdev1", 00:34:47.048 "aliases": [ 00:34:47.048 "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0" 00:34:47.048 ], 00:34:47.048 "product_name": "Raid Volume", 00:34:47.048 "block_size": 4096, 00:34:47.048 "num_blocks": 7936, 00:34:47.048 "uuid": "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0", 00:34:47.048 "assigned_rate_limits": { 00:34:47.048 
"rw_ios_per_sec": 0, 00:34:47.048 "rw_mbytes_per_sec": 0, 00:34:47.048 "r_mbytes_per_sec": 0, 00:34:47.048 "w_mbytes_per_sec": 0 00:34:47.048 }, 00:34:47.048 "claimed": false, 00:34:47.048 "zoned": false, 00:34:47.048 "supported_io_types": { 00:34:47.048 "read": true, 00:34:47.048 "write": true, 00:34:47.048 "unmap": false, 00:34:47.048 "write_zeroes": true, 00:34:47.048 "flush": false, 00:34:47.048 "reset": true, 00:34:47.048 "compare": false, 00:34:47.048 "compare_and_write": false, 00:34:47.048 "abort": false, 00:34:47.048 "nvme_admin": false, 00:34:47.048 "nvme_io": false 00:34:47.048 }, 00:34:47.048 "memory_domains": [ 00:34:47.048 { 00:34:47.048 "dma_device_id": "system", 00:34:47.048 "dma_device_type": 1 00:34:47.048 }, 00:34:47.048 { 00:34:47.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:47.048 "dma_device_type": 2 00:34:47.048 }, 00:34:47.048 { 00:34:47.048 "dma_device_id": "system", 00:34:47.048 "dma_device_type": 1 00:34:47.048 }, 00:34:47.048 { 00:34:47.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:47.048 "dma_device_type": 2 00:34:47.048 } 00:34:47.048 ], 00:34:47.048 "driver_specific": { 00:34:47.048 "raid": { 00:34:47.048 "uuid": "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0", 00:34:47.048 "strip_size_kb": 0, 00:34:47.048 "state": "online", 00:34:47.048 "raid_level": "raid1", 00:34:47.048 "superblock": true, 00:34:47.048 "num_base_bdevs": 2, 00:34:47.048 "num_base_bdevs_discovered": 2, 00:34:47.048 "num_base_bdevs_operational": 2, 00:34:47.048 "base_bdevs_list": [ 00:34:47.048 { 00:34:47.048 "name": "pt1", 00:34:47.048 "uuid": "b0b7a73e-656a-588f-a0e9-b402941cb19a", 00:34:47.048 "is_configured": true, 00:34:47.048 "data_offset": 256, 00:34:47.048 "data_size": 7936 00:34:47.048 }, 00:34:47.048 { 00:34:47.048 "name": "pt2", 00:34:47.048 "uuid": "85c7825f-029a-57e8-84d4-016ac7265fbb", 00:34:47.048 "is_configured": true, 00:34:47.048 "data_offset": 256, 00:34:47.048 "data_size": 7936 00:34:47.048 } 00:34:47.048 ] 00:34:47.048 } 00:34:47.048 } 00:34:47.048 }' 00:34:47.048 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:47.048 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:34:47.048 pt2' 00:34:47.048 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:47.048 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:47.048 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:47.307 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:47.307 "name": "pt1", 00:34:47.307 "aliases": [ 00:34:47.307 "b0b7a73e-656a-588f-a0e9-b402941cb19a" 00:34:47.307 ], 00:34:47.307 "product_name": "passthru", 00:34:47.307 "block_size": 4096, 00:34:47.307 "num_blocks": 8192, 00:34:47.307 "uuid": "b0b7a73e-656a-588f-a0e9-b402941cb19a", 00:34:47.307 "assigned_rate_limits": { 00:34:47.307 "rw_ios_per_sec": 0, 00:34:47.307 "rw_mbytes_per_sec": 0, 00:34:47.307 "r_mbytes_per_sec": 0, 00:34:47.307 "w_mbytes_per_sec": 0 00:34:47.307 }, 00:34:47.307 "claimed": true, 00:34:47.307 "claim_type": "exclusive_write", 00:34:47.307 "zoned": false, 00:34:47.307 "supported_io_types": { 00:34:47.307 "read": true, 00:34:47.307 "write": true, 00:34:47.307 "unmap": true, 00:34:47.307 "write_zeroes": true, 
00:34:47.307 "flush": true, 00:34:47.307 "reset": true, 00:34:47.307 "compare": false, 00:34:47.307 "compare_and_write": false, 00:34:47.307 "abort": true, 00:34:47.307 "nvme_admin": false, 00:34:47.307 "nvme_io": false 00:34:47.307 }, 00:34:47.307 "memory_domains": [ 00:34:47.307 { 00:34:47.307 "dma_device_id": "system", 00:34:47.307 "dma_device_type": 1 00:34:47.307 }, 00:34:47.307 { 00:34:47.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:47.307 "dma_device_type": 2 00:34:47.307 } 00:34:47.307 ], 00:34:47.307 "driver_specific": { 00:34:47.307 "passthru": { 00:34:47.307 "name": "pt1", 00:34:47.307 "base_bdev_name": "malloc1" 00:34:47.307 } 00:34:47.307 } 00:34:47.307 }' 00:34:47.307 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:47.307 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:47.307 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:34:47.307 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:47.566 11:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:47.566 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:47.566 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:47.566 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:47.566 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:47.566 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:47.566 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:47.823 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:47.823 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:47.823 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:47.824 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:48.082 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:48.082 "name": "pt2", 00:34:48.082 "aliases": [ 00:34:48.082 "85c7825f-029a-57e8-84d4-016ac7265fbb" 00:34:48.082 ], 00:34:48.082 "product_name": "passthru", 00:34:48.082 "block_size": 4096, 00:34:48.082 "num_blocks": 8192, 00:34:48.082 "uuid": "85c7825f-029a-57e8-84d4-016ac7265fbb", 00:34:48.082 "assigned_rate_limits": { 00:34:48.082 "rw_ios_per_sec": 0, 00:34:48.082 "rw_mbytes_per_sec": 0, 00:34:48.082 "r_mbytes_per_sec": 0, 00:34:48.082 "w_mbytes_per_sec": 0 00:34:48.082 }, 00:34:48.082 "claimed": true, 00:34:48.082 "claim_type": "exclusive_write", 00:34:48.082 "zoned": false, 00:34:48.082 "supported_io_types": { 00:34:48.082 "read": true, 00:34:48.082 "write": true, 00:34:48.082 "unmap": true, 00:34:48.082 "write_zeroes": true, 00:34:48.082 "flush": true, 00:34:48.082 "reset": true, 00:34:48.082 "compare": false, 00:34:48.082 "compare_and_write": false, 00:34:48.082 "abort": true, 00:34:48.082 "nvme_admin": false, 00:34:48.082 "nvme_io": false 00:34:48.082 }, 00:34:48.082 "memory_domains": [ 00:34:48.082 { 00:34:48.082 "dma_device_id": "system", 00:34:48.082 "dma_device_type": 1 00:34:48.082 }, 00:34:48.082 { 00:34:48.082 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:48.082 "dma_device_type": 2 00:34:48.082 } 00:34:48.082 ], 00:34:48.082 "driver_specific": { 00:34:48.082 "passthru": { 00:34:48.082 "name": "pt2", 00:34:48.082 "base_bdev_name": "malloc2" 00:34:48.082 } 00:34:48.082 } 00:34:48.082 }' 00:34:48.082 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:48.082 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:48.082 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:34:48.082 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:48.082 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:48.082 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:48.082 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:48.340 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:48.340 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:48.340 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:48.340 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:48.340 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:48.340 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:48.340 11:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:48.599 [2024-05-15 11:28:07.176212] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:48.599 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fbdb86fc-5d86-4de9-b8b8-836fc93a75b0 00:34:48.599 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z fbdb86fc-5d86-4de9-b8b8-836fc93a75b0 ']' 00:34:48.599 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:48.857 [2024-05-15 11:28:07.424251] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:48.857 [2024-05-15 11:28:07.424286] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:48.857 [2024-05-15 11:28:07.424365] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:48.857 [2024-05-15 11:28:07.424416] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:48.857 [2024-05-15 11:28:07.424428] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:34:48.857 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.857 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:49.115 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:34:49.115 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:49.115 11:28:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:49.115 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:49.373 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:49.373 11:28:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:49.631 11:28:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:49.631 11:28:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:49.890 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:34:50.149 [2024-05-15 11:28:08.536518] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:50.149 [2024-05-15 11:28:08.538079] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:50.149 [2024-05-15 11:28:08.538146] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:50.149 [2024-05-15 11:28:08.538218] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:50.149 [2024-05-15 11:28:08.538274] bdev_raid.c:2310:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:34:50.149 [2024-05-15 11:28:08.538288] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:34:50.149 request: 00:34:50.149 { 00:34:50.149 "name": "raid_bdev1", 00:34:50.149 "raid_level": "raid1", 00:34:50.149 "base_bdevs": [ 00:34:50.149 "malloc1", 00:34:50.149 "malloc2" 00:34:50.149 ], 00:34:50.149 "superblock": false, 00:34:50.149 "method": "bdev_raid_create", 00:34:50.149 "req_id": 1 00:34:50.149 } 00:34:50.149 Got JSON-RPC error response 00:34:50.149 response: 00:34:50.149 { 00:34:50.149 "code": -17, 00:34:50.149 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:50.149 } 00:34:50.149 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:34:50.149 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:50.149 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:50.149 11:28:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:50.149 11:28:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:50.149 11:28:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:50.408 11:28:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:50.408 11:28:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:50.408 11:28:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:50.667 [2024-05-15 11:28:09.060559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:50.668 [2024-05-15 11:28:09.060725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:50.668 [2024-05-15 11:28:09.060777] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:34:50.668 [2024-05-15 11:28:09.061083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:50.668 pt1 00:34:50.668 [2024-05-15 11:28:09.063078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:50.668 [2024-05-15 11:28:09.063135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:50.668 [2024-05-15 11:28:09.063229] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:34:50.668 [2024-05-15 11:28:09.063296] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
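The verify_raid_bdev_state helper traced here reduces to fetching the named raid bdev and comparing a handful of fields. A minimal sketch of that check, reusing the jq filter from the trace; the expected values (state "configuring", raid1, 2 operational but only 1 discovered base bdev) are the ones this point of the test asserts:

    # Sketch: query the raid bdev under test and compare the fields verify_raid_bdev_state checks.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state <<<"$info")" = configuring ]
    [ "$(jq -r .raid_level <<<"$info")" = raid1 ]
    [ "$(jq -r .num_base_bdevs_operational <<<"$info")" -eq 2 ]
    [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 1 ]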
00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:50.668 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.926 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:50.926 "name": "raid_bdev1", 00:34:50.926 "uuid": "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0", 00:34:50.926 "strip_size_kb": 0, 00:34:50.926 "state": "configuring", 00:34:50.926 "raid_level": "raid1", 00:34:50.926 "superblock": true, 00:34:50.926 "num_base_bdevs": 2, 00:34:50.926 "num_base_bdevs_discovered": 1, 00:34:50.926 "num_base_bdevs_operational": 2, 00:34:50.926 "base_bdevs_list": [ 00:34:50.926 { 00:34:50.926 "name": "pt1", 00:34:50.926 "uuid": "b0b7a73e-656a-588f-a0e9-b402941cb19a", 00:34:50.926 "is_configured": true, 00:34:50.926 "data_offset": 256, 00:34:50.926 "data_size": 7936 00:34:50.926 }, 00:34:50.926 { 00:34:50.926 "name": null, 00:34:50.926 "uuid": "85c7825f-029a-57e8-84d4-016ac7265fbb", 00:34:50.926 "is_configured": false, 00:34:50.926 "data_offset": 256, 00:34:50.926 "data_size": 7936 00:34:50.926 } 00:34:50.926 ] 00:34:50.926 }' 00:34:50.926 11:28:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:50.926 11:28:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:51.494 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:34:51.494 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:51.494 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:51.494 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:51.754 [2024-05-15 11:28:10.264955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:51.754 [2024-05-15 11:28:10.265074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:51.754 [2024-05-15 11:28:10.265125] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:34:51.754 [2024-05-15 11:28:10.265154] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:51.754 [2024-05-15 11:28:10.265572] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:51.754 [2024-05-15 11:28:10.265609] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:51.754 [2024-05-15 11:28:10.265690] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:51.754 [2024-05-15 11:28:10.265729] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:51.754 [2024-05-15 11:28:10.265829] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000011880 00:34:51.754 [2024-05-15 11:28:10.265842] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:51.754 [2024-05-15 11:28:10.265933] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:34:51.754 [2024-05-15 11:28:10.266183] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:34:51.754 [2024-05-15 11:28:10.266199] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:34:51.754 [2024-05-15 11:28:10.266299] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:51.754 pt2 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.754 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:52.012 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:52.012 "name": "raid_bdev1", 00:34:52.012 "uuid": "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0", 00:34:52.012 "strip_size_kb": 0, 00:34:52.012 "state": "online", 00:34:52.012 "raid_level": "raid1", 00:34:52.012 "superblock": true, 00:34:52.012 "num_base_bdevs": 2, 00:34:52.012 "num_base_bdevs_discovered": 2, 00:34:52.013 "num_base_bdevs_operational": 2, 00:34:52.013 "base_bdevs_list": [ 00:34:52.013 { 00:34:52.013 "name": "pt1", 00:34:52.013 "uuid": "b0b7a73e-656a-588f-a0e9-b402941cb19a", 00:34:52.013 "is_configured": true, 00:34:52.013 "data_offset": 256, 00:34:52.013 "data_size": 7936 00:34:52.013 }, 00:34:52.013 { 00:34:52.013 "name": "pt2", 00:34:52.013 "uuid": "85c7825f-029a-57e8-84d4-016ac7265fbb", 00:34:52.013 "is_configured": true, 00:34:52.013 "data_offset": 256, 00:34:52.013 "data_size": 7936 00:34:52.013 } 00:34:52.013 ] 00:34:52.013 }' 00:34:52.013 11:28:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:52.013 11:28:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:34:52.947 [2024-05-15 11:28:11.449413] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:34:52.947 "name": "raid_bdev1", 00:34:52.947 "aliases": [ 00:34:52.947 "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0" 00:34:52.947 ], 00:34:52.947 "product_name": "Raid Volume", 00:34:52.947 "block_size": 4096, 00:34:52.947 "num_blocks": 7936, 00:34:52.947 "uuid": "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0", 00:34:52.947 "assigned_rate_limits": { 00:34:52.947 "rw_ios_per_sec": 0, 00:34:52.947 "rw_mbytes_per_sec": 0, 00:34:52.947 "r_mbytes_per_sec": 0, 00:34:52.947 "w_mbytes_per_sec": 0 00:34:52.947 }, 00:34:52.947 "claimed": false, 00:34:52.947 "zoned": false, 00:34:52.947 "supported_io_types": { 00:34:52.947 "read": true, 00:34:52.947 "write": true, 00:34:52.947 "unmap": false, 00:34:52.947 "write_zeroes": true, 00:34:52.947 "flush": false, 00:34:52.947 "reset": true, 00:34:52.947 "compare": false, 00:34:52.947 "compare_and_write": false, 00:34:52.947 "abort": false, 00:34:52.947 "nvme_admin": false, 00:34:52.947 "nvme_io": false 00:34:52.947 }, 00:34:52.947 "memory_domains": [ 00:34:52.947 { 00:34:52.947 "dma_device_id": "system", 00:34:52.947 "dma_device_type": 1 00:34:52.947 }, 00:34:52.947 { 00:34:52.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.947 "dma_device_type": 2 00:34:52.947 }, 00:34:52.947 { 00:34:52.947 "dma_device_id": "system", 00:34:52.947 "dma_device_type": 1 00:34:52.947 }, 00:34:52.947 { 00:34:52.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.947 "dma_device_type": 2 00:34:52.947 } 00:34:52.947 ], 00:34:52.947 "driver_specific": { 00:34:52.947 "raid": { 00:34:52.947 "uuid": "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0", 00:34:52.947 "strip_size_kb": 0, 00:34:52.947 "state": "online", 00:34:52.947 "raid_level": "raid1", 00:34:52.947 "superblock": true, 00:34:52.947 "num_base_bdevs": 2, 00:34:52.947 "num_base_bdevs_discovered": 2, 00:34:52.947 "num_base_bdevs_operational": 2, 00:34:52.947 "base_bdevs_list": [ 00:34:52.947 { 00:34:52.947 "name": "pt1", 00:34:52.947 "uuid": "b0b7a73e-656a-588f-a0e9-b402941cb19a", 00:34:52.947 "is_configured": true, 00:34:52.947 "data_offset": 256, 00:34:52.947 "data_size": 7936 00:34:52.947 }, 00:34:52.947 { 00:34:52.947 "name": "pt2", 00:34:52.947 "uuid": "85c7825f-029a-57e8-84d4-016ac7265fbb", 00:34:52.947 "is_configured": true, 00:34:52.947 "data_offset": 256, 00:34:52.947 "data_size": 7936 00:34:52.947 } 00:34:52.947 ] 00:34:52.947 } 00:34:52.947 } 00:34:52.947 }' 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # 
jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:34:52.947 pt2' 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:52.947 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:53.250 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:53.250 "name": "pt1", 00:34:53.250 "aliases": [ 00:34:53.250 "b0b7a73e-656a-588f-a0e9-b402941cb19a" 00:34:53.250 ], 00:34:53.250 "product_name": "passthru", 00:34:53.250 "block_size": 4096, 00:34:53.250 "num_blocks": 8192, 00:34:53.250 "uuid": "b0b7a73e-656a-588f-a0e9-b402941cb19a", 00:34:53.250 "assigned_rate_limits": { 00:34:53.250 "rw_ios_per_sec": 0, 00:34:53.250 "rw_mbytes_per_sec": 0, 00:34:53.250 "r_mbytes_per_sec": 0, 00:34:53.250 "w_mbytes_per_sec": 0 00:34:53.250 }, 00:34:53.250 "claimed": true, 00:34:53.250 "claim_type": "exclusive_write", 00:34:53.250 "zoned": false, 00:34:53.250 "supported_io_types": { 00:34:53.250 "read": true, 00:34:53.250 "write": true, 00:34:53.250 "unmap": true, 00:34:53.250 "write_zeroes": true, 00:34:53.250 "flush": true, 00:34:53.250 "reset": true, 00:34:53.250 "compare": false, 00:34:53.250 "compare_and_write": false, 00:34:53.250 "abort": true, 00:34:53.250 "nvme_admin": false, 00:34:53.250 "nvme_io": false 00:34:53.250 }, 00:34:53.250 "memory_domains": [ 00:34:53.250 { 00:34:53.250 "dma_device_id": "system", 00:34:53.250 "dma_device_type": 1 00:34:53.250 }, 00:34:53.250 { 00:34:53.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:53.251 "dma_device_type": 2 00:34:53.251 } 00:34:53.251 ], 00:34:53.251 "driver_specific": { 00:34:53.251 "passthru": { 00:34:53.251 "name": "pt1", 00:34:53.251 "base_bdev_name": "malloc1" 00:34:53.251 } 00:34:53.251 } 00:34:53.251 }' 00:34:53.251 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:53.251 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:53.251 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:34:53.251 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:53.509 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:53.509 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:53.509 11:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:53.509 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:53.509 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:53.509 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:53.767 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:53.767 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:53.767 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:34:53.767 11:28:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:53.767 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:34:54.026 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:34:54.026 "name": "pt2", 00:34:54.026 "aliases": [ 00:34:54.026 "85c7825f-029a-57e8-84d4-016ac7265fbb" 00:34:54.026 ], 00:34:54.026 "product_name": "passthru", 00:34:54.026 "block_size": 4096, 00:34:54.026 "num_blocks": 8192, 00:34:54.026 "uuid": "85c7825f-029a-57e8-84d4-016ac7265fbb", 00:34:54.026 "assigned_rate_limits": { 00:34:54.026 "rw_ios_per_sec": 0, 00:34:54.026 "rw_mbytes_per_sec": 0, 00:34:54.026 "r_mbytes_per_sec": 0, 00:34:54.026 "w_mbytes_per_sec": 0 00:34:54.026 }, 00:34:54.026 "claimed": true, 00:34:54.026 "claim_type": "exclusive_write", 00:34:54.026 "zoned": false, 00:34:54.026 "supported_io_types": { 00:34:54.026 "read": true, 00:34:54.026 "write": true, 00:34:54.026 "unmap": true, 00:34:54.026 "write_zeroes": true, 00:34:54.026 "flush": true, 00:34:54.026 "reset": true, 00:34:54.026 "compare": false, 00:34:54.026 "compare_and_write": false, 00:34:54.026 "abort": true, 00:34:54.026 "nvme_admin": false, 00:34:54.026 "nvme_io": false 00:34:54.026 }, 00:34:54.026 "memory_domains": [ 00:34:54.026 { 00:34:54.026 "dma_device_id": "system", 00:34:54.026 "dma_device_type": 1 00:34:54.026 }, 00:34:54.026 { 00:34:54.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:54.026 "dma_device_type": 2 00:34:54.026 } 00:34:54.026 ], 00:34:54.026 "driver_specific": { 00:34:54.026 "passthru": { 00:34:54.026 "name": "pt2", 00:34:54.026 "base_bdev_name": "malloc2" 00:34:54.026 } 00:34:54.026 } 00:34:54.026 }' 00:34:54.026 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:54.026 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:34:54.026 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:34:54.026 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:54.026 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:34:54.285 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:54.285 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:54.285 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:34:54.285 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:54.285 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:54.285 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:34:54.564 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:34:54.564 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:54.564 11:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:54.564 [2024-05-15 11:28:13.169861] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:54.564 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' fbdb86fc-5d86-4de9-b8b8-836fc93a75b0 '!=' 
fbdb86fc-5d86-4de9-b8b8-836fc93a75b0 ']' 00:34:54.564 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:34:54.564 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:34:54.564 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:34:54.564 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:54.822 [2024-05-15 11:28:13.413791] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:54.822 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:55.079 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:55.079 "name": "raid_bdev1", 00:34:55.079 "uuid": "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0", 00:34:55.079 "strip_size_kb": 0, 00:34:55.079 "state": "online", 00:34:55.079 "raid_level": "raid1", 00:34:55.079 "superblock": true, 00:34:55.079 "num_base_bdevs": 2, 00:34:55.079 "num_base_bdevs_discovered": 1, 00:34:55.079 "num_base_bdevs_operational": 1, 00:34:55.079 "base_bdevs_list": [ 00:34:55.079 { 00:34:55.079 "name": null, 00:34:55.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.079 "is_configured": false, 00:34:55.079 "data_offset": 256, 00:34:55.079 "data_size": 7936 00:34:55.079 }, 00:34:55.079 { 00:34:55.079 "name": "pt2", 00:34:55.079 "uuid": "85c7825f-029a-57e8-84d4-016ac7265fbb", 00:34:55.079 "is_configured": true, 00:34:55.079 "data_offset": 256, 00:34:55.079 "data_size": 7936 00:34:55.079 } 00:34:55.079 ] 00:34:55.079 }' 00:34:55.079 11:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:55.079 11:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:56.011 11:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:56.267 [2024-05-15 11:28:14.749953] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:56.267 [2024-05-15 
11:28:14.749997] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:56.267 [2024-05-15 11:28:14.750068] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:56.267 [2024-05-15 11:28:14.750108] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:56.267 [2024-05-15 11:28:14.750120] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:34:56.267 11:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:56.267 11:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:34:56.524 11:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:34:56.524 11:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:34:56.524 11:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:34:56.524 11:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:56.524 11:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:56.781 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:56.781 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:56.781 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:34:56.781 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:56.781 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:34:56.781 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:57.049 [2024-05-15 11:28:15.466096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:57.049 [2024-05-15 11:28:15.466258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:57.049 [2024-05-15 11:28:15.466313] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:34:57.049 [2024-05-15 11:28:15.466346] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:57.049 [2024-05-15 11:28:15.468683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:57.049 [2024-05-15 11:28:15.468746] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:57.049 [2024-05-15 11:28:15.468869] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:57.049 [2024-05-15 11:28:15.468931] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:57.049 [2024-05-15 11:28:15.469030] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:34:57.049 [2024-05-15 11:28:15.469046] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:57.049 [2024-05-15 11:28:15.469140] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:34:57.049 [2024-05-15 11:28:15.469394] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:34:57.049 [2024-05-15 11:28:15.469413] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:34:57.049 [2024-05-15 11:28:15.469548] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:57.050 pt2 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:57.050 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:57.308 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:57.308 "name": "raid_bdev1", 00:34:57.308 "uuid": "fbdb86fc-5d86-4de9-b8b8-836fc93a75b0", 00:34:57.308 "strip_size_kb": 0, 00:34:57.308 "state": "online", 00:34:57.308 "raid_level": "raid1", 00:34:57.308 "superblock": true, 00:34:57.308 "num_base_bdevs": 2, 00:34:57.308 "num_base_bdevs_discovered": 1, 00:34:57.308 "num_base_bdevs_operational": 1, 00:34:57.308 "base_bdevs_list": [ 00:34:57.308 { 00:34:57.308 "name": null, 00:34:57.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.308 "is_configured": false, 00:34:57.308 "data_offset": 256, 00:34:57.308 "data_size": 7936 00:34:57.308 }, 00:34:57.308 { 00:34:57.308 "name": "pt2", 00:34:57.308 "uuid": "85c7825f-029a-57e8-84d4-016ac7265fbb", 00:34:57.308 "is_configured": true, 00:34:57.308 "data_offset": 256, 00:34:57.308 "data_size": 7936 00:34:57.308 } 00:34:57.308 ] 00:34:57.308 }' 00:34:57.308 11:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:57.308 11:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:57.875 11:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:34:57.875 11:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:34:57.875 11:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:58.133 [2024-05-15 11:28:16.682398] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@563 -- # '[' fbdb86fc-5d86-4de9-b8b8-836fc93a75b0 '!=' fbdb86fc-5d86-4de9-b8b8-836fc93a75b0 ']' 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@568 -- # killprocess 73591 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # '[' -z 73591 ']' 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 73591 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73591 00:34:58.133 killing process with pid 73591 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73591' 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@965 -- # kill 73591 00:34:58.133 11:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 73591 00:34:58.133 [2024-05-15 11:28:16.725934] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:58.133 [2024-05-15 11:28:16.726000] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:58.133 [2024-05-15 11:28:16.726037] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:58.133 [2024-05-15 11:28:16.726048] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:34:58.390 [2024-05-15 11:28:16.889014] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:59.764 11:28:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # return 0 00:34:59.764 00:34:59.764 real 0m15.797s 00:34:59.764 user 0m28.755s 00:34:59.764 sys 0m1.697s 00:34:59.764 ************************************ 00:34:59.764 END TEST raid_superblock_test_4k 00:34:59.764 ************************************ 00:34:59.764 11:28:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:59.764 11:28:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:59.764 11:28:18 bdev_raid -- bdev/bdev_raid.sh@846 -- # '[' '' = true ']' 00:34:59.764 11:28:18 bdev_raid -- bdev/bdev_raid.sh@850 -- # base_malloc_params='-m 32' 00:34:59.764 11:28:18 bdev_raid -- bdev/bdev_raid.sh@851 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:34:59.764 11:28:18 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:34:59.764 11:28:18 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:59.764 11:28:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:59.765 ************************************ 00:34:59.765 START TEST raid_state_function_test_sb_md_separate 00:34:59.765 ************************************ 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # raid_pid=74076 00:34:59.765 Process raid pid: 74076 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 74076' 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@247 -- # waitforlisten 74076 /var/tmp/spdk-raid.sock 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 74076 ']' 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:34:59.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:59.765 11:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:59.765 [2024-05-15 11:28:18.394016] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:34:59.765 [2024-05-15 11:28:18.394212] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:00.023 [2024-05-15 11:28:18.560800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.282 [2024-05-15 11:28:18.795855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.540 [2024-05-15 11:28:18.994516] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:00.798 [2024-05-15 11:28:19.405141] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:00.798 [2024-05-15 11:28:19.405215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:00.798 [2024-05-15 11:28:19.405249] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:00.798 [2024-05-15 11:28:19.405268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:00.798 11:28:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.798 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.055 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:01.055 "name": "Existed_Raid", 00:35:01.055 "uuid": "7f513508-1fdb-4e33-9a56-2018ffc69a3a", 00:35:01.055 "strip_size_kb": 0, 00:35:01.055 "state": "configuring", 00:35:01.055 "raid_level": "raid1", 00:35:01.055 "superblock": true, 00:35:01.055 "num_base_bdevs": 2, 00:35:01.055 "num_base_bdevs_discovered": 0, 00:35:01.055 "num_base_bdevs_operational": 2, 00:35:01.055 "base_bdevs_list": [ 00:35:01.055 { 00:35:01.055 "name": "BaseBdev1", 00:35:01.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.055 "is_configured": false, 00:35:01.055 "data_offset": 0, 00:35:01.055 "data_size": 0 00:35:01.055 }, 00:35:01.055 { 00:35:01.055 "name": "BaseBdev2", 00:35:01.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.055 "is_configured": false, 00:35:01.055 "data_offset": 0, 00:35:01.055 "data_size": 0 00:35:01.055 } 00:35:01.055 ] 00:35:01.055 }' 00:35:01.055 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:01.055 11:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:01.989 11:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:01.989 [2024-05-15 11:28:20.501217] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:01.989 [2024-05-15 11:28:20.501264] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:35:01.989 11:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:02.246 [2024-05-15 11:28:20.737295] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:02.246 [2024-05-15 11:28:20.737394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:02.246 [2024-05-15 11:28:20.737411] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:02.246 [2024-05-15 11:28:20.737438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:02.246 11:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:35:02.503 [2024-05-15 11:28:20.977902] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:02.503 BaseBdev1 00:35:02.503 11:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:35:02.503 11:28:20 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:35:02.503 11:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:35:02.503 11:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:35:02.503 11:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:35:02.503 11:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:35:02.503 11:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:02.760 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:03.017 [ 00:35:03.017 { 00:35:03.017 "name": "BaseBdev1", 00:35:03.017 "aliases": [ 00:35:03.017 "6d35a06d-4036-4459-8950-f032280c39e1" 00:35:03.017 ], 00:35:03.017 "product_name": "Malloc disk", 00:35:03.017 "block_size": 4096, 00:35:03.017 "num_blocks": 8192, 00:35:03.017 "uuid": "6d35a06d-4036-4459-8950-f032280c39e1", 00:35:03.017 "md_size": 32, 00:35:03.017 "md_interleave": false, 00:35:03.017 "dif_type": 0, 00:35:03.017 "assigned_rate_limits": { 00:35:03.017 "rw_ios_per_sec": 0, 00:35:03.017 "rw_mbytes_per_sec": 0, 00:35:03.017 "r_mbytes_per_sec": 0, 00:35:03.017 "w_mbytes_per_sec": 0 00:35:03.017 }, 00:35:03.017 "claimed": true, 00:35:03.017 "claim_type": "exclusive_write", 00:35:03.017 "zoned": false, 00:35:03.017 "supported_io_types": { 00:35:03.017 "read": true, 00:35:03.017 "write": true, 00:35:03.017 "unmap": true, 00:35:03.017 "write_zeroes": true, 00:35:03.017 "flush": true, 00:35:03.017 "reset": true, 00:35:03.017 "compare": false, 00:35:03.017 "compare_and_write": false, 00:35:03.017 "abort": true, 00:35:03.017 "nvme_admin": false, 00:35:03.017 "nvme_io": false 00:35:03.017 }, 00:35:03.017 "memory_domains": [ 00:35:03.017 { 00:35:03.017 "dma_device_id": "system", 00:35:03.017 "dma_device_type": 1 00:35:03.017 }, 00:35:03.017 { 00:35:03.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:03.017 "dma_device_type": 2 00:35:03.017 } 00:35:03.017 ], 00:35:03.017 "driver_specific": {} 00:35:03.017 } 00:35:03.017 ] 00:35:03.017 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:03.018 
11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:03.018 "name": "Existed_Raid", 00:35:03.018 "uuid": "2b9dbf25-aa94-4969-b988-a5aac100ebba", 00:35:03.018 "strip_size_kb": 0, 00:35:03.018 "state": "configuring", 00:35:03.018 "raid_level": "raid1", 00:35:03.018 "superblock": true, 00:35:03.018 "num_base_bdevs": 2, 00:35:03.018 "num_base_bdevs_discovered": 1, 00:35:03.018 "num_base_bdevs_operational": 2, 00:35:03.018 "base_bdevs_list": [ 00:35:03.018 { 00:35:03.018 "name": "BaseBdev1", 00:35:03.018 "uuid": "6d35a06d-4036-4459-8950-f032280c39e1", 00:35:03.018 "is_configured": true, 00:35:03.018 "data_offset": 256, 00:35:03.018 "data_size": 7936 00:35:03.018 }, 00:35:03.018 { 00:35:03.018 "name": "BaseBdev2", 00:35:03.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:03.018 "is_configured": false, 00:35:03.018 "data_offset": 0, 00:35:03.018 "data_size": 0 00:35:03.018 } 00:35:03.018 ] 00:35:03.018 }' 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:03.018 11:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:03.952 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:03.952 [2024-05-15 11:28:22.442116] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:03.952 [2024-05-15 11:28:22.442170] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:35:03.952 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:04.211 [2024-05-15 11:28:22.694266] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:04.211 [2024-05-15 11:28:22.697659] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:04.211 [2024-05-15 11:28:22.697719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.211 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:04.469 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:04.469 "name": "Existed_Raid", 00:35:04.469 "uuid": "24d19a41-0a9a-4369-903c-dd02895278f2", 00:35:04.469 "strip_size_kb": 0, 00:35:04.469 "state": "configuring", 00:35:04.469 "raid_level": "raid1", 00:35:04.469 "superblock": true, 00:35:04.469 "num_base_bdevs": 2, 00:35:04.469 "num_base_bdevs_discovered": 1, 00:35:04.469 "num_base_bdevs_operational": 2, 00:35:04.469 "base_bdevs_list": [ 00:35:04.469 { 00:35:04.469 "name": "BaseBdev1", 00:35:04.469 "uuid": "6d35a06d-4036-4459-8950-f032280c39e1", 00:35:04.469 "is_configured": true, 00:35:04.469 "data_offset": 256, 00:35:04.469 "data_size": 7936 00:35:04.469 }, 00:35:04.469 { 00:35:04.469 "name": "BaseBdev2", 00:35:04.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.469 "is_configured": false, 00:35:04.469 "data_offset": 0, 00:35:04.469 "data_size": 0 00:35:04.469 } 00:35:04.469 ] 00:35:04.469 }' 00:35:04.469 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:04.469 11:28:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:05.044 11:28:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:35:05.303 BaseBdev2 00:35:05.303 [2024-05-15 11:28:23.766861] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:05.303 [2024-05-15 11:28:23.767013] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:35:05.303 [2024-05-15 11:28:23.767028] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:05.303 [2024-05-15 11:28:23.767137] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:35:05.303 [2024-05-15 11:28:23.767221] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:35:05.303 [2024-05-15 11:28:23.767234] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:35:05.303 [2024-05-15 11:28:23.767303] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:05.303 11:28:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:35:05.303 11:28:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:35:05.303 11:28:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:35:05.303 11:28:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:35:05.303 11:28:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:35:05.303 11:28:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:35:05.303 11:28:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:05.562 11:28:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:05.562 [ 00:35:05.562 { 00:35:05.562 "name": "BaseBdev2", 00:35:05.562 "aliases": [ 00:35:05.562 "649776bf-5435-4ee5-9624-1934f7e49315" 00:35:05.562 ], 00:35:05.562 "product_name": "Malloc disk", 00:35:05.562 "block_size": 4096, 00:35:05.562 "num_blocks": 8192, 00:35:05.562 "uuid": "649776bf-5435-4ee5-9624-1934f7e49315", 00:35:05.562 "md_size": 32, 00:35:05.562 "md_interleave": false, 00:35:05.562 "dif_type": 0, 00:35:05.562 "assigned_rate_limits": { 00:35:05.562 "rw_ios_per_sec": 0, 00:35:05.562 "rw_mbytes_per_sec": 0, 00:35:05.562 "r_mbytes_per_sec": 0, 00:35:05.562 "w_mbytes_per_sec": 0 00:35:05.562 }, 00:35:05.562 "claimed": true, 00:35:05.562 "claim_type": "exclusive_write", 00:35:05.562 "zoned": false, 00:35:05.562 "supported_io_types": { 00:35:05.562 "read": true, 00:35:05.562 "write": true, 00:35:05.562 "unmap": true, 00:35:05.562 "write_zeroes": true, 00:35:05.562 "flush": true, 00:35:05.562 "reset": true, 00:35:05.562 "compare": false, 00:35:05.562 "compare_and_write": false, 00:35:05.562 "abort": true, 00:35:05.562 "nvme_admin": false, 00:35:05.562 "nvme_io": false 00:35:05.562 }, 00:35:05.562 "memory_domains": [ 00:35:05.562 { 00:35:05.562 "dma_device_id": "system", 00:35:05.562 "dma_device_type": 1 00:35:05.562 }, 00:35:05.562 { 00:35:05.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.562 "dma_device_type": 2 00:35:05.562 } 00:35:05.562 ], 00:35:05.562 "driver_specific": {} 00:35:05.562 } 00:35:05.562 ] 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:05.562 11:28:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:05.562 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:05.821 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:05.821 "name": "Existed_Raid", 00:35:05.821 "uuid": "24d19a41-0a9a-4369-903c-dd02895278f2", 00:35:05.821 "strip_size_kb": 0, 00:35:05.821 "state": "online", 00:35:05.821 "raid_level": "raid1", 00:35:05.821 "superblock": true, 00:35:05.821 "num_base_bdevs": 2, 00:35:05.821 "num_base_bdevs_discovered": 2, 00:35:05.821 "num_base_bdevs_operational": 2, 00:35:05.821 "base_bdevs_list": [ 00:35:05.821 { 00:35:05.821 "name": "BaseBdev1", 00:35:05.821 "uuid": "6d35a06d-4036-4459-8950-f032280c39e1", 00:35:05.821 "is_configured": true, 00:35:05.821 "data_offset": 256, 00:35:05.821 "data_size": 7936 00:35:05.821 }, 00:35:05.821 { 00:35:05.821 "name": "BaseBdev2", 00:35:05.821 "uuid": "649776bf-5435-4ee5-9624-1934f7e49315", 00:35:05.821 "is_configured": true, 00:35:05.821 "data_offset": 256, 00:35:05.821 "data_size": 7936 00:35:05.821 } 00:35:05.821 ] 00:35:05.821 }' 00:35:05.821 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:05.821 11:28:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:06.754 11:28:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:35:06.754 [2024-05-15 11:28:25.311316] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:35:06.754 "name": "Existed_Raid", 00:35:06.754 "aliases": [ 00:35:06.754 "24d19a41-0a9a-4369-903c-dd02895278f2" 00:35:06.754 ], 00:35:06.754 "product_name": "Raid Volume", 00:35:06.754 "block_size": 4096, 00:35:06.754 "num_blocks": 7936, 00:35:06.754 "uuid": "24d19a41-0a9a-4369-903c-dd02895278f2", 00:35:06.754 "md_size": 32, 00:35:06.754 "md_interleave": false, 00:35:06.754 "dif_type": 0, 00:35:06.754 "assigned_rate_limits": { 00:35:06.754 "rw_ios_per_sec": 0, 00:35:06.754 "rw_mbytes_per_sec": 0, 00:35:06.754 "r_mbytes_per_sec": 0, 00:35:06.754 "w_mbytes_per_sec": 0 00:35:06.754 }, 00:35:06.754 "claimed": false, 00:35:06.754 "zoned": false, 00:35:06.754 "supported_io_types": { 00:35:06.754 "read": true, 00:35:06.754 "write": true, 00:35:06.754 "unmap": false, 00:35:06.754 "write_zeroes": true, 00:35:06.754 "flush": false, 00:35:06.754 "reset": true, 00:35:06.754 "compare": false, 00:35:06.754 "compare_and_write": false, 00:35:06.754 "abort": false, 00:35:06.754 "nvme_admin": false, 00:35:06.754 "nvme_io": false 00:35:06.754 }, 00:35:06.754 "memory_domains": [ 00:35:06.754 { 00:35:06.754 "dma_device_id": "system", 00:35:06.754 "dma_device_type": 1 00:35:06.754 }, 00:35:06.754 { 00:35:06.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.754 "dma_device_type": 2 00:35:06.754 }, 00:35:06.754 { 00:35:06.754 "dma_device_id": "system", 00:35:06.754 "dma_device_type": 1 00:35:06.754 }, 00:35:06.754 { 00:35:06.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.754 "dma_device_type": 2 00:35:06.754 } 00:35:06.754 ], 00:35:06.754 "driver_specific": { 00:35:06.754 "raid": { 00:35:06.754 "uuid": "24d19a41-0a9a-4369-903c-dd02895278f2", 00:35:06.754 "strip_size_kb": 0, 00:35:06.754 "state": "online", 00:35:06.754 "raid_level": "raid1", 00:35:06.754 "superblock": true, 00:35:06.754 "num_base_bdevs": 2, 00:35:06.754 "num_base_bdevs_discovered": 2, 00:35:06.754 "num_base_bdevs_operational": 2, 00:35:06.754 "base_bdevs_list": [ 00:35:06.754 { 00:35:06.754 "name": "BaseBdev1", 00:35:06.754 "uuid": "6d35a06d-4036-4459-8950-f032280c39e1", 00:35:06.754 "is_configured": true, 00:35:06.754 "data_offset": 256, 00:35:06.754 "data_size": 7936 00:35:06.754 }, 00:35:06.754 { 00:35:06.754 "name": "BaseBdev2", 00:35:06.754 "uuid": "649776bf-5435-4ee5-9624-1934f7e49315", 00:35:06.754 "is_configured": true, 00:35:06.754 "data_offset": 256, 00:35:06.754 "data_size": 7936 00:35:06.754 } 00:35:06.754 ] 00:35:06.754 } 00:35:06.754 } 00:35:06.754 }' 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:35:06.754 BaseBdev2' 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:06.754 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev1 00:35:07.011 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:07.011 "name": "BaseBdev1", 00:35:07.011 "aliases": [ 00:35:07.011 "6d35a06d-4036-4459-8950-f032280c39e1" 00:35:07.011 ], 00:35:07.011 "product_name": "Malloc disk", 00:35:07.011 "block_size": 4096, 00:35:07.011 "num_blocks": 8192, 00:35:07.011 "uuid": "6d35a06d-4036-4459-8950-f032280c39e1", 00:35:07.011 "md_size": 32, 00:35:07.011 "md_interleave": false, 00:35:07.011 "dif_type": 0, 00:35:07.011 "assigned_rate_limits": { 00:35:07.011 "rw_ios_per_sec": 0, 00:35:07.011 "rw_mbytes_per_sec": 0, 00:35:07.011 "r_mbytes_per_sec": 0, 00:35:07.011 "w_mbytes_per_sec": 0 00:35:07.011 }, 00:35:07.011 "claimed": true, 00:35:07.011 "claim_type": "exclusive_write", 00:35:07.011 "zoned": false, 00:35:07.011 "supported_io_types": { 00:35:07.011 "read": true, 00:35:07.011 "write": true, 00:35:07.011 "unmap": true, 00:35:07.011 "write_zeroes": true, 00:35:07.011 "flush": true, 00:35:07.011 "reset": true, 00:35:07.011 "compare": false, 00:35:07.011 "compare_and_write": false, 00:35:07.011 "abort": true, 00:35:07.011 "nvme_admin": false, 00:35:07.011 "nvme_io": false 00:35:07.011 }, 00:35:07.011 "memory_domains": [ 00:35:07.011 { 00:35:07.011 "dma_device_id": "system", 00:35:07.011 "dma_device_type": 1 00:35:07.011 }, 00:35:07.011 { 00:35:07.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:07.011 "dma_device_type": 2 00:35:07.011 } 00:35:07.011 ], 00:35:07.011 "driver_specific": {} 00:35:07.011 }' 00:35:07.011 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:07.268 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:07.268 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:35:07.268 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:07.268 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:07.268 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:07.268 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:07.527 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:07.527 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:35:07.527 11:28:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:07.527 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:07.527 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:07.527 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:07.527 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:07.527 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:07.785 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:07.785 "name": 
"BaseBdev2", 00:35:07.785 "aliases": [ 00:35:07.785 "649776bf-5435-4ee5-9624-1934f7e49315" 00:35:07.785 ], 00:35:07.785 "product_name": "Malloc disk", 00:35:07.785 "block_size": 4096, 00:35:07.785 "num_blocks": 8192, 00:35:07.785 "uuid": "649776bf-5435-4ee5-9624-1934f7e49315", 00:35:07.785 "md_size": 32, 00:35:07.785 "md_interleave": false, 00:35:07.785 "dif_type": 0, 00:35:07.785 "assigned_rate_limits": { 00:35:07.785 "rw_ios_per_sec": 0, 00:35:07.785 "rw_mbytes_per_sec": 0, 00:35:07.785 "r_mbytes_per_sec": 0, 00:35:07.785 "w_mbytes_per_sec": 0 00:35:07.785 }, 00:35:07.785 "claimed": true, 00:35:07.785 "claim_type": "exclusive_write", 00:35:07.785 "zoned": false, 00:35:07.785 "supported_io_types": { 00:35:07.785 "read": true, 00:35:07.785 "write": true, 00:35:07.785 "unmap": true, 00:35:07.785 "write_zeroes": true, 00:35:07.785 "flush": true, 00:35:07.785 "reset": true, 00:35:07.785 "compare": false, 00:35:07.785 "compare_and_write": false, 00:35:07.785 "abort": true, 00:35:07.785 "nvme_admin": false, 00:35:07.785 "nvme_io": false 00:35:07.785 }, 00:35:07.785 "memory_domains": [ 00:35:07.785 { 00:35:07.785 "dma_device_id": "system", 00:35:07.785 "dma_device_type": 1 00:35:07.785 }, 00:35:07.785 { 00:35:07.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:07.785 "dma_device_type": 2 00:35:07.785 } 00:35:07.785 ], 00:35:07.785 "driver_specific": {} 00:35:07.785 }' 00:35:07.785 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:07.785 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:07.785 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:35:07.785 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:08.043 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:08.043 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:08.043 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:08.043 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:08.043 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:35:08.043 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:08.043 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:08.302 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:08.302 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:08.302 [2024-05-15 11:28:26.887452] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # local expected_state 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@215 -- # return 0 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.560 11:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:08.818 11:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:08.818 "name": "Existed_Raid", 00:35:08.818 "uuid": "24d19a41-0a9a-4369-903c-dd02895278f2", 00:35:08.818 "strip_size_kb": 0, 00:35:08.818 "state": "online", 00:35:08.818 "raid_level": "raid1", 00:35:08.818 "superblock": true, 00:35:08.818 "num_base_bdevs": 2, 00:35:08.818 "num_base_bdevs_discovered": 1, 00:35:08.818 "num_base_bdevs_operational": 1, 00:35:08.818 "base_bdevs_list": [ 00:35:08.818 { 00:35:08.818 "name": null, 00:35:08.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.818 "is_configured": false, 00:35:08.818 "data_offset": 256, 00:35:08.818 "data_size": 7936 00:35:08.818 }, 00:35:08.818 { 00:35:08.818 "name": "BaseBdev2", 00:35:08.818 "uuid": "649776bf-5435-4ee5-9624-1934f7e49315", 00:35:08.818 "is_configured": true, 00:35:08.818 "data_offset": 256, 00:35:08.818 "data_size": 7936 00:35:08.818 } 00:35:08.818 ] 00:35:08.818 }' 00:35:08.818 11:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:08.818 11:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:09.383 11:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:09.383 11:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:09.383 11:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.383 11:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:35:09.664 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:35:09.664 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:09.664 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:09.922 [2024-05-15 11:28:28.316561] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:09.922 [2024-05-15 11:28:28.316644] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:09.922 [2024-05-15 11:28:28.431417] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:09.922 [2024-05-15 11:28:28.431520] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:09.922 [2024-05-15 11:28:28.431540] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:35:09.922 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:09.922 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:09.922 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:35:09.922 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@342 -- # killprocess 74076 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 74076 ']' 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 74076 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74076 00:35:10.180 killing process with pid 74076 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74076' 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 74076 00:35:10.180 11:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # 
wait 74076 00:35:10.180 [2024-05-15 11:28:28.702250] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:10.180 [2024-05-15 11:28:28.702356] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:11.556 11:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@344 -- # return 0 00:35:11.556 00:35:11.556 real 0m11.640s 00:35:11.556 user 0m20.655s 00:35:11.556 sys 0m1.177s 00:35:11.556 11:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:11.556 ************************************ 00:35:11.556 END TEST raid_state_function_test_sb_md_separate 00:35:11.556 ************************************ 00:35:11.556 11:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:11.556 11:28:29 bdev_raid -- bdev/bdev_raid.sh@852 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:35:11.556 11:28:29 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:35:11.556 11:28:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:11.556 11:28:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:11.556 ************************************ 00:35:11.556 START TEST raid_superblock_test_md_separate 00:35:11.556 ************************************ 00:35:11.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=74449 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 74449 /var/tmp/spdk-raid.sock 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # '[' -z 74449 ']' 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:11.556 11:28:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:11.556 [2024-05-15 11:28:30.084773] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:35:11.556 [2024-05-15 11:28:30.084989] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74449 ] 00:35:11.815 [2024-05-15 11:28:30.249375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.074 [2024-05-15 11:28:30.498620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.074 [2024-05-15 11:28:30.699332] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:12.332 11:28:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:35:12.590 malloc1 00:35:12.590 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:12.849 [2024-05-15 11:28:31.302355] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:12.849 [2024-05-15 11:28:31.302422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:12.849 [2024-05-15 11:28:31.302471] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:35:12.849 [2024-05-15 11:28:31.302508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:12.849 [2024-05-15 11:28:31.305023] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:12.849 [2024-05-15 11:28:31.305118] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:12.849 pt1 00:35:12.849 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:12.849 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:12.849 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:35:12.849 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:35:12.849 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:12.849 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:12.849 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:12.849 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:12.849 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:35:13.107 malloc2 00:35:13.107 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:13.366 [2024-05-15 11:28:31.787354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:13.366 [2024-05-15 11:28:31.787462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:13.366 [2024-05-15 11:28:31.787513] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:35:13.366 [2024-05-15 11:28:31.787553] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:13.366 [2024-05-15 11:28:31.789298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:13.366 [2024-05-15 11:28:31.789342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:13.366 pt2 00:35:13.366 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:13.366 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:13.366 11:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:35:13.624 [2024-05-15 11:28:32.015450] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:13.624 [2024-05-15 11:28:32.018641] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:13.624 [2024-05-15 11:28:32.018822] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:35:13.624 [2024-05-15 11:28:32.018851] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:13.624 [2024-05-15 11:28:32.018980] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:35:13.624 [2024-05-15 11:28:32.019065] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:35:13.624 [2024-05-15 11:28:32.019078] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:35:13.624 [2024-05-15 11:28:32.019157] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:13.624 "name": "raid_bdev1", 00:35:13.624 "uuid": "8f959e7f-4486-42b6-a49f-22731fa81523", 00:35:13.624 "strip_size_kb": 0, 00:35:13.624 "state": "online", 00:35:13.624 "raid_level": "raid1", 00:35:13.624 "superblock": true, 00:35:13.624 "num_base_bdevs": 2, 00:35:13.624 "num_base_bdevs_discovered": 2, 00:35:13.624 "num_base_bdevs_operational": 2, 00:35:13.624 "base_bdevs_list": [ 00:35:13.624 { 00:35:13.624 "name": "pt1", 00:35:13.624 "uuid": "063a8944-87bd-574c-a081-239f36617286", 00:35:13.624 "is_configured": true, 00:35:13.624 "data_offset": 256, 00:35:13.624 "data_size": 7936 00:35:13.624 }, 00:35:13.624 { 00:35:13.624 "name": "pt2", 00:35:13.624 "uuid": "32cd023b-d4af-5870-8d76-38eac8725dbe", 00:35:13.624 "is_configured": true, 00:35:13.624 "data_offset": 256, 00:35:13.624 "data_size": 7936 00:35:13.624 } 00:35:13.624 ] 00:35:13.624 }' 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:13.624 11:28:32 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:35:14.191 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:35:14.191 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:35:14.191 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:35:14.191 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:35:14.191 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:35:14.191 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:35:14.191 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:14.191 11:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:35:14.477 [2024-05-15 11:28:33.027698] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:14.477 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:35:14.477 "name": "raid_bdev1", 00:35:14.477 "aliases": [ 00:35:14.477 "8f959e7f-4486-42b6-a49f-22731fa81523" 00:35:14.477 ], 00:35:14.477 "product_name": "Raid Volume", 00:35:14.477 "block_size": 4096, 00:35:14.477 "num_blocks": 7936, 00:35:14.477 "uuid": "8f959e7f-4486-42b6-a49f-22731fa81523", 00:35:14.478 "md_size": 32, 00:35:14.478 "md_interleave": false, 00:35:14.478 "dif_type": 0, 00:35:14.478 "assigned_rate_limits": { 00:35:14.478 "rw_ios_per_sec": 0, 00:35:14.478 "rw_mbytes_per_sec": 0, 00:35:14.478 "r_mbytes_per_sec": 0, 00:35:14.478 "w_mbytes_per_sec": 0 00:35:14.478 }, 00:35:14.478 "claimed": false, 00:35:14.478 "zoned": false, 00:35:14.478 "supported_io_types": { 00:35:14.478 "read": true, 00:35:14.478 "write": true, 00:35:14.478 "unmap": false, 00:35:14.478 "write_zeroes": true, 00:35:14.478 "flush": false, 00:35:14.478 "reset": true, 00:35:14.478 "compare": false, 00:35:14.478 "compare_and_write": false, 00:35:14.478 "abort": false, 00:35:14.478 "nvme_admin": false, 00:35:14.478 "nvme_io": false 00:35:14.478 }, 00:35:14.478 "memory_domains": [ 00:35:14.478 { 00:35:14.478 "dma_device_id": "system", 00:35:14.478 "dma_device_type": 1 00:35:14.478 }, 00:35:14.478 { 00:35:14.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:14.478 "dma_device_type": 2 00:35:14.478 }, 00:35:14.478 { 00:35:14.478 "dma_device_id": "system", 00:35:14.478 "dma_device_type": 1 00:35:14.478 }, 00:35:14.478 { 00:35:14.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:14.478 "dma_device_type": 2 00:35:14.478 } 00:35:14.478 ], 00:35:14.478 "driver_specific": { 00:35:14.478 "raid": { 00:35:14.478 "uuid": "8f959e7f-4486-42b6-a49f-22731fa81523", 00:35:14.478 "strip_size_kb": 0, 00:35:14.478 "state": "online", 00:35:14.478 "raid_level": "raid1", 00:35:14.478 "superblock": true, 00:35:14.478 "num_base_bdevs": 2, 00:35:14.478 "num_base_bdevs_discovered": 2, 00:35:14.478 "num_base_bdevs_operational": 2, 00:35:14.478 "base_bdevs_list": [ 00:35:14.478 { 00:35:14.478 "name": "pt1", 00:35:14.478 "uuid": "063a8944-87bd-574c-a081-239f36617286", 00:35:14.478 "is_configured": true, 00:35:14.478 "data_offset": 256, 00:35:14.478 "data_size": 7936 00:35:14.478 }, 00:35:14.478 { 00:35:14.478 "name": "pt2", 00:35:14.478 "uuid": 
"32cd023b-d4af-5870-8d76-38eac8725dbe", 00:35:14.478 "is_configured": true, 00:35:14.478 "data_offset": 256, 00:35:14.478 "data_size": 7936 00:35:14.478 } 00:35:14.478 ] 00:35:14.478 } 00:35:14.478 } 00:35:14.478 }' 00:35:14.478 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:14.478 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:35:14.478 pt2' 00:35:14.478 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:14.478 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:14.478 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:14.750 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:14.750 "name": "pt1", 00:35:14.750 "aliases": [ 00:35:14.750 "063a8944-87bd-574c-a081-239f36617286" 00:35:14.750 ], 00:35:14.750 "product_name": "passthru", 00:35:14.750 "block_size": 4096, 00:35:14.750 "num_blocks": 8192, 00:35:14.750 "uuid": "063a8944-87bd-574c-a081-239f36617286", 00:35:14.750 "md_size": 32, 00:35:14.750 "md_interleave": false, 00:35:14.750 "dif_type": 0, 00:35:14.750 "assigned_rate_limits": { 00:35:14.750 "rw_ios_per_sec": 0, 00:35:14.750 "rw_mbytes_per_sec": 0, 00:35:14.750 "r_mbytes_per_sec": 0, 00:35:14.750 "w_mbytes_per_sec": 0 00:35:14.750 }, 00:35:14.750 "claimed": true, 00:35:14.750 "claim_type": "exclusive_write", 00:35:14.750 "zoned": false, 00:35:14.750 "supported_io_types": { 00:35:14.750 "read": true, 00:35:14.750 "write": true, 00:35:14.751 "unmap": true, 00:35:14.751 "write_zeroes": true, 00:35:14.751 "flush": true, 00:35:14.751 "reset": true, 00:35:14.751 "compare": false, 00:35:14.751 "compare_and_write": false, 00:35:14.751 "abort": true, 00:35:14.751 "nvme_admin": false, 00:35:14.751 "nvme_io": false 00:35:14.751 }, 00:35:14.751 "memory_domains": [ 00:35:14.751 { 00:35:14.751 "dma_device_id": "system", 00:35:14.751 "dma_device_type": 1 00:35:14.751 }, 00:35:14.751 { 00:35:14.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:14.751 "dma_device_type": 2 00:35:14.751 } 00:35:14.751 ], 00:35:14.751 "driver_specific": { 00:35:14.751 "passthru": { 00:35:14.751 "name": "pt1", 00:35:14.751 "base_bdev_name": "malloc1" 00:35:14.751 } 00:35:14.751 } 00:35:14.751 }' 00:35:14.751 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:15.009 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:15.009 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:35:15.009 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:15.009 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:15.009 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:15.009 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:15.009 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:15.009 11:28:33 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:35:15.009 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:15.267 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:15.267 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:15.267 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:15.267 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:15.267 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:15.526 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:15.526 "name": "pt2", 00:35:15.526 "aliases": [ 00:35:15.526 "32cd023b-d4af-5870-8d76-38eac8725dbe" 00:35:15.526 ], 00:35:15.526 "product_name": "passthru", 00:35:15.526 "block_size": 4096, 00:35:15.526 "num_blocks": 8192, 00:35:15.526 "uuid": "32cd023b-d4af-5870-8d76-38eac8725dbe", 00:35:15.526 "md_size": 32, 00:35:15.526 "md_interleave": false, 00:35:15.526 "dif_type": 0, 00:35:15.526 "assigned_rate_limits": { 00:35:15.526 "rw_ios_per_sec": 0, 00:35:15.526 "rw_mbytes_per_sec": 0, 00:35:15.526 "r_mbytes_per_sec": 0, 00:35:15.526 "w_mbytes_per_sec": 0 00:35:15.526 }, 00:35:15.526 "claimed": true, 00:35:15.526 "claim_type": "exclusive_write", 00:35:15.526 "zoned": false, 00:35:15.526 "supported_io_types": { 00:35:15.526 "read": true, 00:35:15.526 "write": true, 00:35:15.526 "unmap": true, 00:35:15.526 "write_zeroes": true, 00:35:15.526 "flush": true, 00:35:15.526 "reset": true, 00:35:15.526 "compare": false, 00:35:15.526 "compare_and_write": false, 00:35:15.526 "abort": true, 00:35:15.526 "nvme_admin": false, 00:35:15.526 "nvme_io": false 00:35:15.526 }, 00:35:15.526 "memory_domains": [ 00:35:15.526 { 00:35:15.526 "dma_device_id": "system", 00:35:15.526 "dma_device_type": 1 00:35:15.526 }, 00:35:15.526 { 00:35:15.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:15.526 "dma_device_type": 2 00:35:15.526 } 00:35:15.526 ], 00:35:15.526 "driver_specific": { 00:35:15.526 "passthru": { 00:35:15.526 "name": "pt2", 00:35:15.526 "base_bdev_name": "malloc2" 00:35:15.526 } 00:35:15.526 } 00:35:15.526 }' 00:35:15.526 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:15.526 11:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:15.526 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:35:15.526 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:15.526 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:15.526 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:15.526 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:15.784 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:15.784 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:35:15.784 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:15.784 
11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:15.784 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:15.784 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:15.784 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:35:16.042 [2024-05-15 11:28:34.615949] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:16.042 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8f959e7f-4486-42b6-a49f-22731fa81523 00:35:16.042 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 8f959e7f-4486-42b6-a49f-22731fa81523 ']' 00:35:16.042 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:16.302 [2024-05-15 11:28:34.855831] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:16.302 [2024-05-15 11:28:34.855871] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:16.302 [2024-05-15 11:28:34.855958] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:16.302 [2024-05-15 11:28:34.856011] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:16.302 [2024-05-15 11:28:34.856025] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:35:16.302 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:35:16.302 11:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:16.562 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:35:16.562 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:35:16.562 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:16.562 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:16.820 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:16.820 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:17.078 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:17.078 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:17.337 11:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:17.594 [2024-05-15 11:28:36.119991] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:17.594 [2024-05-15 11:28:36.121566] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:17.594 [2024-05-15 11:28:36.121624] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:17.594 [2024-05-15 11:28:36.121697] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:17.594 [2024-05-15 11:28:36.121737] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:17.594 [2024-05-15 11:28:36.121750] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:35:17.594 request: 00:35:17.594 { 00:35:17.594 "name": "raid_bdev1", 00:35:17.594 "raid_level": "raid1", 00:35:17.594 "base_bdevs": [ 00:35:17.594 "malloc1", 00:35:17.594 "malloc2" 00:35:17.594 ], 00:35:17.595 "superblock": false, 00:35:17.595 "method": "bdev_raid_create", 00:35:17.595 "req_id": 1 00:35:17.595 } 00:35:17.595 Got JSON-RPC error response 00:35:17.595 response: 00:35:17.595 { 00:35:17.595 "code": -17, 00:35:17.595 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:17.595 } 00:35:17.595 11:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:35:17.595 11:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:17.595 11:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n 
'' ]] 00:35:17.595 11:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:17.595 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:35:17.595 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:17.852 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:35:17.852 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:35:17.852 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:18.110 [2024-05-15 11:28:36.600007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:18.111 [2024-05-15 11:28:36.600114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:18.111 [2024-05-15 11:28:36.600163] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:35:18.111 [2024-05-15 11:28:36.600196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:18.111 [2024-05-15 11:28:36.601862] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:18.111 [2024-05-15 11:28:36.601915] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:18.111 [2024-05-15 11:28:36.602003] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:35:18.111 [2024-05-15 11:28:36.602074] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:18.111 pt1 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:18.111 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:18.370 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:35:18.370 "name": "raid_bdev1", 00:35:18.370 "uuid": "8f959e7f-4486-42b6-a49f-22731fa81523", 00:35:18.370 "strip_size_kb": 0, 00:35:18.370 "state": "configuring", 00:35:18.370 "raid_level": "raid1", 00:35:18.370 "superblock": true, 00:35:18.370 "num_base_bdevs": 2, 00:35:18.370 "num_base_bdevs_discovered": 1, 00:35:18.370 "num_base_bdevs_operational": 2, 00:35:18.370 "base_bdevs_list": [ 00:35:18.370 { 00:35:18.370 "name": "pt1", 00:35:18.370 "uuid": "063a8944-87bd-574c-a081-239f36617286", 00:35:18.370 "is_configured": true, 00:35:18.370 "data_offset": 256, 00:35:18.370 "data_size": 7936 00:35:18.370 }, 00:35:18.370 { 00:35:18.370 "name": null, 00:35:18.370 "uuid": "32cd023b-d4af-5870-8d76-38eac8725dbe", 00:35:18.370 "is_configured": false, 00:35:18.370 "data_offset": 256, 00:35:18.370 "data_size": 7936 00:35:18.370 } 00:35:18.370 ] 00:35:18.370 }' 00:35:18.370 11:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:18.370 11:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:18.948 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:35:18.948 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:35:18.948 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:18.948 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:19.219 [2024-05-15 11:28:37.712169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:19.219 [2024-05-15 11:28:37.712265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.219 [2024-05-15 11:28:37.712317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:35:19.219 [2024-05-15 11:28:37.712345] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.219 [2024-05-15 11:28:37.712539] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.219 [2024-05-15 11:28:37.712574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:19.219 [2024-05-15 11:28:37.712660] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:35:19.219 [2024-05-15 11:28:37.712684] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:19.219 [2024-05-15 11:28:37.712743] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:35:19.219 [2024-05-15 11:28:37.712754] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:19.219 [2024-05-15 11:28:37.712997] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:35:19.219 [2024-05-15 11:28:37.713078] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:35:19.219 [2024-05-15 11:28:37.713090] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:35:19.219 [2024-05-15 11:28:37.713168] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:19.219 pt2 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:19.219 
11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.219 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.478 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:19.478 "name": "raid_bdev1", 00:35:19.478 "uuid": "8f959e7f-4486-42b6-a49f-22731fa81523", 00:35:19.478 "strip_size_kb": 0, 00:35:19.478 "state": "online", 00:35:19.478 "raid_level": "raid1", 00:35:19.478 "superblock": true, 00:35:19.478 "num_base_bdevs": 2, 00:35:19.478 "num_base_bdevs_discovered": 2, 00:35:19.478 "num_base_bdevs_operational": 2, 00:35:19.478 "base_bdevs_list": [ 00:35:19.478 { 00:35:19.478 "name": "pt1", 00:35:19.478 "uuid": "063a8944-87bd-574c-a081-239f36617286", 00:35:19.478 "is_configured": true, 00:35:19.478 "data_offset": 256, 00:35:19.478 "data_size": 7936 00:35:19.478 }, 00:35:19.478 { 00:35:19.478 "name": "pt2", 00:35:19.478 "uuid": "32cd023b-d4af-5870-8d76-38eac8725dbe", 00:35:19.478 "is_configured": true, 00:35:19.478 "data_offset": 256, 00:35:19.478 "data_size": 7936 00:35:19.478 } 00:35:19.478 ] 00:35:19.478 }' 00:35:19.478 11:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:19.478 11:28:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # 
local name 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:35:20.414 [2024-05-15 11:28:38.896481] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:35:20.414 "name": "raid_bdev1", 00:35:20.414 "aliases": [ 00:35:20.414 "8f959e7f-4486-42b6-a49f-22731fa81523" 00:35:20.414 ], 00:35:20.414 "product_name": "Raid Volume", 00:35:20.414 "block_size": 4096, 00:35:20.414 "num_blocks": 7936, 00:35:20.414 "uuid": "8f959e7f-4486-42b6-a49f-22731fa81523", 00:35:20.414 "md_size": 32, 00:35:20.414 "md_interleave": false, 00:35:20.414 "dif_type": 0, 00:35:20.414 "assigned_rate_limits": { 00:35:20.414 "rw_ios_per_sec": 0, 00:35:20.414 "rw_mbytes_per_sec": 0, 00:35:20.414 "r_mbytes_per_sec": 0, 00:35:20.414 "w_mbytes_per_sec": 0 00:35:20.414 }, 00:35:20.414 "claimed": false, 00:35:20.414 "zoned": false, 00:35:20.414 "supported_io_types": { 00:35:20.414 "read": true, 00:35:20.414 "write": true, 00:35:20.414 "unmap": false, 00:35:20.414 "write_zeroes": true, 00:35:20.414 "flush": false, 00:35:20.414 "reset": true, 00:35:20.414 "compare": false, 00:35:20.414 "compare_and_write": false, 00:35:20.414 "abort": false, 00:35:20.414 "nvme_admin": false, 00:35:20.414 "nvme_io": false 00:35:20.414 }, 00:35:20.414 "memory_domains": [ 00:35:20.414 { 00:35:20.414 "dma_device_id": "system", 00:35:20.414 "dma_device_type": 1 00:35:20.414 }, 00:35:20.414 { 00:35:20.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:20.414 "dma_device_type": 2 00:35:20.414 }, 00:35:20.414 { 00:35:20.414 "dma_device_id": "system", 00:35:20.414 "dma_device_type": 1 00:35:20.414 }, 00:35:20.414 { 00:35:20.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:20.414 "dma_device_type": 2 00:35:20.414 } 00:35:20.414 ], 00:35:20.414 "driver_specific": { 00:35:20.414 "raid": { 00:35:20.414 "uuid": "8f959e7f-4486-42b6-a49f-22731fa81523", 00:35:20.414 "strip_size_kb": 0, 00:35:20.414 "state": "online", 00:35:20.414 "raid_level": "raid1", 00:35:20.414 "superblock": true, 00:35:20.414 "num_base_bdevs": 2, 00:35:20.414 "num_base_bdevs_discovered": 2, 00:35:20.414 "num_base_bdevs_operational": 2, 00:35:20.414 "base_bdevs_list": [ 00:35:20.414 { 00:35:20.414 "name": "pt1", 00:35:20.414 "uuid": "063a8944-87bd-574c-a081-239f36617286", 00:35:20.414 "is_configured": true, 00:35:20.414 "data_offset": 256, 00:35:20.414 "data_size": 7936 00:35:20.414 }, 00:35:20.414 { 00:35:20.414 "name": "pt2", 00:35:20.414 "uuid": "32cd023b-d4af-5870-8d76-38eac8725dbe", 00:35:20.414 "is_configured": true, 00:35:20.414 "data_offset": 256, 00:35:20.414 "data_size": 7936 00:35:20.414 } 00:35:20.414 ] 00:35:20.414 } 00:35:20.414 } 00:35:20.414 }' 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:35:20.414 pt2' 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:20.414 11:28:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:20.672 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:20.672 "name": "pt1", 00:35:20.672 "aliases": [ 00:35:20.672 "063a8944-87bd-574c-a081-239f36617286" 00:35:20.672 ], 00:35:20.672 "product_name": "passthru", 00:35:20.672 "block_size": 4096, 00:35:20.672 "num_blocks": 8192, 00:35:20.672 "uuid": "063a8944-87bd-574c-a081-239f36617286", 00:35:20.672 "md_size": 32, 00:35:20.672 "md_interleave": false, 00:35:20.672 "dif_type": 0, 00:35:20.672 "assigned_rate_limits": { 00:35:20.672 "rw_ios_per_sec": 0, 00:35:20.672 "rw_mbytes_per_sec": 0, 00:35:20.672 "r_mbytes_per_sec": 0, 00:35:20.672 "w_mbytes_per_sec": 0 00:35:20.672 }, 00:35:20.672 "claimed": true, 00:35:20.672 "claim_type": "exclusive_write", 00:35:20.672 "zoned": false, 00:35:20.672 "supported_io_types": { 00:35:20.672 "read": true, 00:35:20.672 "write": true, 00:35:20.672 "unmap": true, 00:35:20.672 "write_zeroes": true, 00:35:20.672 "flush": true, 00:35:20.672 "reset": true, 00:35:20.672 "compare": false, 00:35:20.672 "compare_and_write": false, 00:35:20.672 "abort": true, 00:35:20.672 "nvme_admin": false, 00:35:20.672 "nvme_io": false 00:35:20.672 }, 00:35:20.672 "memory_domains": [ 00:35:20.672 { 00:35:20.672 "dma_device_id": "system", 00:35:20.672 "dma_device_type": 1 00:35:20.672 }, 00:35:20.672 { 00:35:20.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:20.672 "dma_device_type": 2 00:35:20.672 } 00:35:20.672 ], 00:35:20.672 "driver_specific": { 00:35:20.672 "passthru": { 00:35:20.672 "name": "pt1", 00:35:20.672 "base_bdev_name": "malloc1" 00:35:20.672 } 00:35:20.672 } 00:35:20.672 }' 00:35:20.672 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:20.672 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:20.931 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:35:20.931 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:20.931 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:20.931 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:20.931 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:20.931 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:21.189 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:35:21.189 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:21.189 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:21.189 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:21.189 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:21.189 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:21.189 11:28:39 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:21.448 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:21.448 "name": "pt2", 00:35:21.448 "aliases": [ 00:35:21.448 "32cd023b-d4af-5870-8d76-38eac8725dbe" 00:35:21.448 ], 00:35:21.448 "product_name": "passthru", 00:35:21.448 "block_size": 4096, 00:35:21.448 "num_blocks": 8192, 00:35:21.448 "uuid": "32cd023b-d4af-5870-8d76-38eac8725dbe", 00:35:21.448 "md_size": 32, 00:35:21.448 "md_interleave": false, 00:35:21.448 "dif_type": 0, 00:35:21.448 "assigned_rate_limits": { 00:35:21.448 "rw_ios_per_sec": 0, 00:35:21.448 "rw_mbytes_per_sec": 0, 00:35:21.448 "r_mbytes_per_sec": 0, 00:35:21.448 "w_mbytes_per_sec": 0 00:35:21.448 }, 00:35:21.448 "claimed": true, 00:35:21.448 "claim_type": "exclusive_write", 00:35:21.448 "zoned": false, 00:35:21.448 "supported_io_types": { 00:35:21.448 "read": true, 00:35:21.448 "write": true, 00:35:21.448 "unmap": true, 00:35:21.448 "write_zeroes": true, 00:35:21.448 "flush": true, 00:35:21.448 "reset": true, 00:35:21.448 "compare": false, 00:35:21.448 "compare_and_write": false, 00:35:21.448 "abort": true, 00:35:21.448 "nvme_admin": false, 00:35:21.448 "nvme_io": false 00:35:21.448 }, 00:35:21.448 "memory_domains": [ 00:35:21.448 { 00:35:21.448 "dma_device_id": "system", 00:35:21.448 "dma_device_type": 1 00:35:21.448 }, 00:35:21.448 { 00:35:21.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:21.448 "dma_device_type": 2 00:35:21.448 } 00:35:21.448 ], 00:35:21.448 "driver_specific": { 00:35:21.448 "passthru": { 00:35:21.448 "name": "pt2", 00:35:21.448 "base_bdev_name": "malloc2" 00:35:21.448 } 00:35:21.448 } 00:35:21.448 }' 00:35:21.448 11:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:21.448 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:21.448 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:35:21.448 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:21.706 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:21.706 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:21.706 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:21.706 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:21.706 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:35:21.706 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:21.706 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:21.964 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:21.964 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:21.964 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:35:22.223 [2024-05-15 11:28:40.632768] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:22.223 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 
8f959e7f-4486-42b6-a49f-22731fa81523 '!=' 8f959e7f-4486-42b6-a49f-22731fa81523 ']' 00:35:22.223 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:35:22.223 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:35:22.223 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:35:22.223 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:22.481 [2024-05-15 11:28:40.876656] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:22.481 11:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.740 11:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:22.740 "name": "raid_bdev1", 00:35:22.740 "uuid": "8f959e7f-4486-42b6-a49f-22731fa81523", 00:35:22.740 "strip_size_kb": 0, 00:35:22.740 "state": "online", 00:35:22.740 "raid_level": "raid1", 00:35:22.740 "superblock": true, 00:35:22.740 "num_base_bdevs": 2, 00:35:22.740 "num_base_bdevs_discovered": 1, 00:35:22.740 "num_base_bdevs_operational": 1, 00:35:22.740 "base_bdevs_list": [ 00:35:22.740 { 00:35:22.740 "name": null, 00:35:22.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:22.740 "is_configured": false, 00:35:22.740 "data_offset": 256, 00:35:22.740 "data_size": 7936 00:35:22.740 }, 00:35:22.740 { 00:35:22.740 "name": "pt2", 00:35:22.740 "uuid": "32cd023b-d4af-5870-8d76-38eac8725dbe", 00:35:22.740 "is_configured": true, 00:35:22.740 "data_offset": 256, 00:35:22.740 "data_size": 7936 00:35:22.740 } 00:35:22.740 ] 00:35:22.740 }' 00:35:22.740 11:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:22.740 11:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:23.310 11:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:23.577 [2024-05-15 11:28:41.992728] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:23.577 [2024-05-15 11:28:41.992768] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:23.577 [2024-05-15 11:28:41.993035] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:23.577 [2024-05-15 11:28:41.993083] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:23.577 [2024-05-15 11:28:41.993095] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:35:23.577 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:23.577 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:35:23.834 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:35:23.834 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:35:23.834 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:35:23.834 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:23.834 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:24.093 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:35:24.093 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:24.093 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:35:24.093 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:35:24.093 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:35:24.093 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:24.093 [2024-05-15 11:28:42.708850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:24.093 [2024-05-15 11:28:42.708995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:24.093 [2024-05-15 11:28:42.709061] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:35:24.093 [2024-05-15 11:28:42.709096] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:24.093 [2024-05-15 11:28:42.710945] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:24.093 [2024-05-15 11:28:42.710994] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:24.093 [2024-05-15 11:28:42.711103] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:35:24.093 [2024-05-15 11:28:42.711170] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:24.093 [2024-05-15 11:28:42.711236] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:35:24.093 [2024-05-15 11:28:42.711251] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:24.093 [2024-05-15 11:28:42.711351] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:35:24.093 [2024-05-15 11:28:42.711452] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:35:24.093 [2024-05-15 11:28:42.711469] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:35:24.094 [2024-05-15 11:28:42.711558] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:24.094 pt2 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:24.094 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:24.350 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:24.350 "name": "raid_bdev1", 00:35:24.350 "uuid": "8f959e7f-4486-42b6-a49f-22731fa81523", 00:35:24.350 "strip_size_kb": 0, 00:35:24.350 "state": "online", 00:35:24.350 "raid_level": "raid1", 00:35:24.350 "superblock": true, 00:35:24.350 "num_base_bdevs": 2, 00:35:24.350 "num_base_bdevs_discovered": 1, 00:35:24.350 "num_base_bdevs_operational": 1, 00:35:24.350 "base_bdevs_list": [ 00:35:24.350 { 00:35:24.350 "name": null, 00:35:24.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:24.350 "is_configured": false, 00:35:24.350 "data_offset": 256, 00:35:24.350 "data_size": 7936 00:35:24.350 }, 00:35:24.350 { 00:35:24.350 "name": "pt2", 00:35:24.350 "uuid": "32cd023b-d4af-5870-8d76-38eac8725dbe", 00:35:24.350 "is_configured": true, 00:35:24.350 "data_offset": 256, 00:35:24.350 "data_size": 7936 00:35:24.350 } 00:35:24.350 ] 00:35:24.350 }' 00:35:24.350 11:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:24.350 11:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:25.280 11:28:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:35:25.280 11:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:25.280 11:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:35:25.280 [2024-05-15 11:28:43.889114] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:25.280 11:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # '[' 8f959e7f-4486-42b6-a49f-22731fa81523 '!=' 8f959e7f-4486-42b6-a49f-22731fa81523 ']' 00:35:25.280 11:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@568 -- # killprocess 74449 00:35:25.280 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 74449 ']' 00:35:25.280 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 74449 00:35:25.539 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:35:25.539 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:25.539 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74449 00:35:25.539 killing process with pid 74449 00:35:25.539 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:25.539 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:25.539 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74449' 00:35:25.539 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 74449 00:35:25.539 11:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 74449 00:35:25.539 [2024-05-15 11:28:43.940748] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:25.539 [2024-05-15 11:28:43.940854] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:25.539 [2024-05-15 11:28:43.940903] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:25.539 [2024-05-15 11:28:43.940915] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:35:25.539 [2024-05-15 11:28:44.115233] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:26.913 11:28:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # return 0 00:35:26.913 00:35:26.913 real 0m15.373s 00:35:26.913 user 0m28.088s 00:35:26.913 sys 0m1.598s 00:35:26.913 11:28:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:26.913 11:28:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:26.913 ************************************ 00:35:26.913 END TEST raid_superblock_test_md_separate 00:35:26.913 ************************************ 00:35:26.913 11:28:45 bdev_raid -- bdev/bdev_raid.sh@853 -- # '[' '' = true ']' 00:35:26.913 11:28:45 bdev_raid -- bdev/bdev_raid.sh@857 -- # base_malloc_params='-m 32 -i' 00:35:26.913 11:28:45 bdev_raid -- bdev/bdev_raid.sh@858 -- # run_test 
raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:35:26.913 11:28:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:35:26.913 11:28:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:26.913 11:28:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:26.913 ************************************ 00:35:26.913 START TEST raid_state_function_test_sb_md_interleaved 00:35:26.913 ************************************ 00:35:26.913 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:35:26.913 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:35:26.914 Process raid pid: 74932 00:35:26.914 11:28:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # raid_pid=74932 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 74932' 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@247 -- # waitforlisten 74932 /var/tmp/spdk-raid.sock 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 74932 ']' 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:26.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:26.914 11:28:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:26.914 [2024-05-15 11:28:45.517592] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:35:26.914 [2024-05-15 11:28:45.517775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.172 [2024-05-15 11:28:45.687285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.430 [2024-05-15 11:28:45.946017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.687 [2024-05-15 11:28:46.149916] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:27.687 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:27.687 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:35:27.687 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:27.945 [2024-05-15 11:28:46.504896] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:27.945 [2024-05-15 11:28:46.504966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:27.945 [2024-05-15 11:28:46.504984] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:27.945 [2024-05-15 11:28:46.505004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.945 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:28.203 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:28.203 "name": "Existed_Raid", 00:35:28.203 "uuid": "f467890a-078e-4fe0-b7b0-de7db502456e", 00:35:28.203 "strip_size_kb": 0, 00:35:28.203 "state": "configuring", 00:35:28.203 "raid_level": "raid1", 00:35:28.203 "superblock": true, 00:35:28.203 "num_base_bdevs": 2, 00:35:28.203 "num_base_bdevs_discovered": 0, 00:35:28.203 "num_base_bdevs_operational": 2, 00:35:28.203 "base_bdevs_list": [ 00:35:28.203 { 00:35:28.203 "name": "BaseBdev1", 00:35:28.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:28.203 "is_configured": false, 00:35:28.203 "data_offset": 0, 00:35:28.203 "data_size": 0 00:35:28.203 }, 00:35:28.203 { 00:35:28.203 "name": "BaseBdev2", 00:35:28.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:28.203 "is_configured": false, 00:35:28.203 "data_offset": 0, 00:35:28.203 "data_size": 0 00:35:28.203 } 00:35:28.203 ] 00:35:28.203 }' 00:35:28.203 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:28.203 11:28:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:28.769 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:29.027 [2024-05-15 11:28:47.480879] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:29.027 [2024-05-15 11:28:47.480920] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:35:29.027 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:29.284 [2024-05-15 11:28:47.708938] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:29.284 
[2024-05-15 11:28:47.709033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:29.284 [2024-05-15 11:28:47.709083] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:29.284 [2024-05-15 11:28:47.709110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:29.284 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:35:29.542 [2024-05-15 11:28:47.978309] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:29.542 BaseBdev1 00:35:29.542 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:35:29.542 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:35:29.542 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:35:29.542 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:35:29.542 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:35:29.542 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:35:29.542 11:28:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:29.801 [ 00:35:29.801 { 00:35:29.801 "name": "BaseBdev1", 00:35:29.801 "aliases": [ 00:35:29.801 "728a97f7-b7b6-4b41-a059-d8562ec6d80a" 00:35:29.801 ], 00:35:29.801 "product_name": "Malloc disk", 00:35:29.801 "block_size": 4128, 00:35:29.801 "num_blocks": 8192, 00:35:29.801 "uuid": "728a97f7-b7b6-4b41-a059-d8562ec6d80a", 00:35:29.801 "md_size": 32, 00:35:29.801 "md_interleave": true, 00:35:29.801 "dif_type": 0, 00:35:29.801 "assigned_rate_limits": { 00:35:29.801 "rw_ios_per_sec": 0, 00:35:29.801 "rw_mbytes_per_sec": 0, 00:35:29.801 "r_mbytes_per_sec": 0, 00:35:29.801 "w_mbytes_per_sec": 0 00:35:29.801 }, 00:35:29.801 "claimed": true, 00:35:29.801 "claim_type": "exclusive_write", 00:35:29.801 "zoned": false, 00:35:29.801 "supported_io_types": { 00:35:29.801 "read": true, 00:35:29.801 "write": true, 00:35:29.801 "unmap": true, 00:35:29.801 "write_zeroes": true, 00:35:29.801 "flush": true, 00:35:29.801 "reset": true, 00:35:29.801 "compare": false, 00:35:29.801 "compare_and_write": false, 00:35:29.801 "abort": true, 00:35:29.801 "nvme_admin": false, 00:35:29.801 "nvme_io": false 00:35:29.801 }, 00:35:29.801 "memory_domains": [ 00:35:29.801 { 00:35:29.801 "dma_device_id": "system", 00:35:29.801 "dma_device_type": 1 00:35:29.801 }, 00:35:29.801 { 00:35:29.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:29.801 "dma_device_type": 2 00:35:29.801 } 00:35:29.801 ], 00:35:29.801 "driver_specific": {} 00:35:29.801 } 00:35:29.801 ] 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:35:29.801 11:28:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:29.801 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:30.059 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:30.059 "name": "Existed_Raid", 00:35:30.059 "uuid": "ee0f5e91-ed68-4851-bfa1-27050560adea", 00:35:30.059 "strip_size_kb": 0, 00:35:30.059 "state": "configuring", 00:35:30.059 "raid_level": "raid1", 00:35:30.059 "superblock": true, 00:35:30.059 "num_base_bdevs": 2, 00:35:30.059 "num_base_bdevs_discovered": 1, 00:35:30.059 "num_base_bdevs_operational": 2, 00:35:30.059 "base_bdevs_list": [ 00:35:30.059 { 00:35:30.059 "name": "BaseBdev1", 00:35:30.059 "uuid": "728a97f7-b7b6-4b41-a059-d8562ec6d80a", 00:35:30.059 "is_configured": true, 00:35:30.059 "data_offset": 256, 00:35:30.059 "data_size": 7936 00:35:30.059 }, 00:35:30.059 { 00:35:30.059 "name": "BaseBdev2", 00:35:30.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.059 "is_configured": false, 00:35:30.059 "data_offset": 0, 00:35:30.059 "data_size": 0 00:35:30.059 } 00:35:30.059 ] 00:35:30.059 }' 00:35:30.059 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:30.059 11:28:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:30.996 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:30.996 [2024-05-15 11:28:49.470583] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:30.996 [2024-05-15 11:28:49.470646] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:35:30.996 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:31.255 [2024-05-15 11:28:49.658650] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:31.255 [2024-05-15 11:28:49.660234] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:31.255 [2024-05-15 11:28:49.660305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:31.255 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:31.514 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:31.514 "name": "Existed_Raid", 00:35:31.514 "uuid": "026942cc-6094-439f-aede-d51810304233", 00:35:31.514 "strip_size_kb": 0, 00:35:31.514 "state": "configuring", 00:35:31.514 "raid_level": "raid1", 00:35:31.514 "superblock": true, 00:35:31.514 "num_base_bdevs": 2, 00:35:31.514 "num_base_bdevs_discovered": 1, 00:35:31.514 "num_base_bdevs_operational": 2, 00:35:31.514 "base_bdevs_list": [ 00:35:31.514 { 00:35:31.514 "name": "BaseBdev1", 00:35:31.514 "uuid": "728a97f7-b7b6-4b41-a059-d8562ec6d80a", 00:35:31.514 "is_configured": true, 00:35:31.514 "data_offset": 256, 00:35:31.514 "data_size": 7936 00:35:31.514 }, 00:35:31.514 { 00:35:31.514 "name": "BaseBdev2", 00:35:31.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:31.514 "is_configured": false, 00:35:31.514 "data_offset": 0, 00:35:31.514 "data_size": 0 00:35:31.514 } 00:35:31.514 ] 00:35:31.514 }' 00:35:31.514 11:28:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:31.514 11:28:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:32.080 11:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:35:32.339 [2024-05-15 11:28:50.803597] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:32.339 BaseBdev2 00:35:32.339 [2024-05-15 11:28:50.803785] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:35:32.339 [2024-05-15 11:28:50.804028] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:35:32.339 [2024-05-15 11:28:50.804110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:35:32.339 [2024-05-15 11:28:50.804186] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:35:32.339 [2024-05-15 11:28:50.804200] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:35:32.339 [2024-05-15 11:28:50.804251] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:32.339 11:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:35:32.339 11:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:35:32.339 11:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:35:32.339 11:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:35:32.339 11:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:35:32.339 11:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:35:32.339 11:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:32.597 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:32.855 [ 00:35:32.855 { 00:35:32.855 "name": "BaseBdev2", 00:35:32.855 "aliases": [ 00:35:32.855 "26bf201a-85c1-42ae-aa1f-5755f6faa825" 00:35:32.855 ], 00:35:32.855 "product_name": "Malloc disk", 00:35:32.855 "block_size": 4128, 00:35:32.855 "num_blocks": 8192, 00:35:32.855 "uuid": "26bf201a-85c1-42ae-aa1f-5755f6faa825", 00:35:32.855 "md_size": 32, 00:35:32.855 "md_interleave": true, 00:35:32.855 "dif_type": 0, 00:35:32.855 "assigned_rate_limits": { 00:35:32.855 "rw_ios_per_sec": 0, 00:35:32.855 "rw_mbytes_per_sec": 0, 00:35:32.855 "r_mbytes_per_sec": 0, 00:35:32.855 "w_mbytes_per_sec": 0 00:35:32.855 }, 00:35:32.855 "claimed": true, 00:35:32.855 "claim_type": "exclusive_write", 00:35:32.855 "zoned": false, 00:35:32.855 "supported_io_types": { 00:35:32.855 "read": true, 00:35:32.855 "write": true, 00:35:32.855 "unmap": true, 00:35:32.855 "write_zeroes": true, 00:35:32.855 "flush": true, 00:35:32.855 "reset": true, 00:35:32.855 "compare": false, 00:35:32.855 "compare_and_write": false, 00:35:32.855 "abort": true, 00:35:32.855 "nvme_admin": false, 00:35:32.855 "nvme_io": false 
00:35:32.855 }, 00:35:32.855 "memory_domains": [ 00:35:32.855 { 00:35:32.855 "dma_device_id": "system", 00:35:32.855 "dma_device_type": 1 00:35:32.855 }, 00:35:32.855 { 00:35:32.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:32.855 "dma_device_type": 2 00:35:32.855 } 00:35:32.855 ], 00:35:32.855 "driver_specific": {} 00:35:32.855 } 00:35:32.855 ] 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:32.855 "name": "Existed_Raid", 00:35:32.855 "uuid": "026942cc-6094-439f-aede-d51810304233", 00:35:32.855 "strip_size_kb": 0, 00:35:32.855 "state": "online", 00:35:32.855 "raid_level": "raid1", 00:35:32.855 "superblock": true, 00:35:32.855 "num_base_bdevs": 2, 00:35:32.855 "num_base_bdevs_discovered": 2, 00:35:32.855 "num_base_bdevs_operational": 2, 00:35:32.855 "base_bdevs_list": [ 00:35:32.855 { 00:35:32.855 "name": "BaseBdev1", 00:35:32.855 "uuid": "728a97f7-b7b6-4b41-a059-d8562ec6d80a", 00:35:32.855 "is_configured": true, 00:35:32.855 "data_offset": 256, 00:35:32.855 "data_size": 7936 00:35:32.855 }, 00:35:32.855 { 00:35:32.855 "name": "BaseBdev2", 00:35:32.855 "uuid": "26bf201a-85c1-42ae-aa1f-5755f6faa825", 00:35:32.855 "is_configured": true, 00:35:32.855 "data_offset": 256, 00:35:32.855 "data_size": 7936 00:35:32.855 } 00:35:32.855 ] 00:35:32.855 }' 00:35:32.855 11:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:32.855 11:28:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:35:33.790 [2024-05-15 11:28:52.372146] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:35:33.790 "name": "Existed_Raid", 00:35:33.790 "aliases": [ 00:35:33.790 "026942cc-6094-439f-aede-d51810304233" 00:35:33.790 ], 00:35:33.790 "product_name": "Raid Volume", 00:35:33.790 "block_size": 4128, 00:35:33.790 "num_blocks": 7936, 00:35:33.790 "uuid": "026942cc-6094-439f-aede-d51810304233", 00:35:33.790 "md_size": 32, 00:35:33.790 "md_interleave": true, 00:35:33.790 "dif_type": 0, 00:35:33.790 "assigned_rate_limits": { 00:35:33.790 "rw_ios_per_sec": 0, 00:35:33.790 "rw_mbytes_per_sec": 0, 00:35:33.790 "r_mbytes_per_sec": 0, 00:35:33.790 "w_mbytes_per_sec": 0 00:35:33.790 }, 00:35:33.790 "claimed": false, 00:35:33.790 "zoned": false, 00:35:33.790 "supported_io_types": { 00:35:33.790 "read": true, 00:35:33.790 "write": true, 00:35:33.790 "unmap": false, 00:35:33.790 "write_zeroes": true, 00:35:33.790 "flush": false, 00:35:33.790 "reset": true, 00:35:33.790 "compare": false, 00:35:33.790 "compare_and_write": false, 00:35:33.790 "abort": false, 00:35:33.790 "nvme_admin": false, 00:35:33.790 "nvme_io": false 00:35:33.790 }, 00:35:33.790 "memory_domains": [ 00:35:33.790 { 00:35:33.790 "dma_device_id": "system", 00:35:33.790 "dma_device_type": 1 00:35:33.790 }, 00:35:33.790 { 00:35:33.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:33.790 "dma_device_type": 2 00:35:33.790 }, 00:35:33.790 { 00:35:33.790 "dma_device_id": "system", 00:35:33.790 "dma_device_type": 1 00:35:33.790 }, 00:35:33.790 { 00:35:33.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:33.790 "dma_device_type": 2 00:35:33.790 } 00:35:33.790 ], 00:35:33.790 "driver_specific": { 00:35:33.790 "raid": { 00:35:33.790 "uuid": "026942cc-6094-439f-aede-d51810304233", 00:35:33.790 "strip_size_kb": 0, 00:35:33.790 "state": "online", 00:35:33.790 "raid_level": "raid1", 00:35:33.790 "superblock": true, 00:35:33.790 "num_base_bdevs": 2, 00:35:33.790 "num_base_bdevs_discovered": 2, 00:35:33.790 "num_base_bdevs_operational": 2, 00:35:33.790 "base_bdevs_list": [ 00:35:33.790 { 00:35:33.790 "name": "BaseBdev1", 00:35:33.790 "uuid": "728a97f7-b7b6-4b41-a059-d8562ec6d80a", 00:35:33.790 "is_configured": true, 00:35:33.790 
"data_offset": 256, 00:35:33.790 "data_size": 7936 00:35:33.790 }, 00:35:33.790 { 00:35:33.790 "name": "BaseBdev2", 00:35:33.790 "uuid": "26bf201a-85c1-42ae-aa1f-5755f6faa825", 00:35:33.790 "is_configured": true, 00:35:33.790 "data_offset": 256, 00:35:33.790 "data_size": 7936 00:35:33.790 } 00:35:33.790 ] 00:35:33.790 } 00:35:33.790 } 00:35:33.790 }' 00:35:33.790 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:34.049 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:35:34.049 BaseBdev2' 00:35:34.049 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:34.049 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:35:34.049 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:34.308 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:34.308 "name": "BaseBdev1", 00:35:34.308 "aliases": [ 00:35:34.308 "728a97f7-b7b6-4b41-a059-d8562ec6d80a" 00:35:34.308 ], 00:35:34.308 "product_name": "Malloc disk", 00:35:34.308 "block_size": 4128, 00:35:34.308 "num_blocks": 8192, 00:35:34.308 "uuid": "728a97f7-b7b6-4b41-a059-d8562ec6d80a", 00:35:34.308 "md_size": 32, 00:35:34.308 "md_interleave": true, 00:35:34.308 "dif_type": 0, 00:35:34.308 "assigned_rate_limits": { 00:35:34.308 "rw_ios_per_sec": 0, 00:35:34.308 "rw_mbytes_per_sec": 0, 00:35:34.308 "r_mbytes_per_sec": 0, 00:35:34.308 "w_mbytes_per_sec": 0 00:35:34.308 }, 00:35:34.308 "claimed": true, 00:35:34.308 "claim_type": "exclusive_write", 00:35:34.308 "zoned": false, 00:35:34.308 "supported_io_types": { 00:35:34.308 "read": true, 00:35:34.308 "write": true, 00:35:34.308 "unmap": true, 00:35:34.308 "write_zeroes": true, 00:35:34.308 "flush": true, 00:35:34.308 "reset": true, 00:35:34.308 "compare": false, 00:35:34.308 "compare_and_write": false, 00:35:34.308 "abort": true, 00:35:34.308 "nvme_admin": false, 00:35:34.308 "nvme_io": false 00:35:34.308 }, 00:35:34.308 "memory_domains": [ 00:35:34.308 { 00:35:34.308 "dma_device_id": "system", 00:35:34.308 "dma_device_type": 1 00:35:34.308 }, 00:35:34.308 { 00:35:34.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:34.308 "dma_device_type": 2 00:35:34.308 } 00:35:34.308 ], 00:35:34.308 "driver_specific": {} 00:35:34.308 }' 00:35:34.308 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:34.308 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:34.308 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:35:34.308 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:34.308 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:34.308 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:34.308 11:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:34.566 11:28:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:34.566 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:35:34.566 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:34.566 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:34.566 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:34.566 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:34.566 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:34.566 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:34.824 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:34.824 "name": "BaseBdev2", 00:35:34.824 "aliases": [ 00:35:34.824 "26bf201a-85c1-42ae-aa1f-5755f6faa825" 00:35:34.824 ], 00:35:34.824 "product_name": "Malloc disk", 00:35:34.824 "block_size": 4128, 00:35:34.824 "num_blocks": 8192, 00:35:34.824 "uuid": "26bf201a-85c1-42ae-aa1f-5755f6faa825", 00:35:34.824 "md_size": 32, 00:35:34.824 "md_interleave": true, 00:35:34.824 "dif_type": 0, 00:35:34.824 "assigned_rate_limits": { 00:35:34.824 "rw_ios_per_sec": 0, 00:35:34.824 "rw_mbytes_per_sec": 0, 00:35:34.824 "r_mbytes_per_sec": 0, 00:35:34.824 "w_mbytes_per_sec": 0 00:35:34.824 }, 00:35:34.824 "claimed": true, 00:35:34.824 "claim_type": "exclusive_write", 00:35:34.824 "zoned": false, 00:35:34.824 "supported_io_types": { 00:35:34.824 "read": true, 00:35:34.824 "write": true, 00:35:34.824 "unmap": true, 00:35:34.824 "write_zeroes": true, 00:35:34.824 "flush": true, 00:35:34.824 "reset": true, 00:35:34.824 "compare": false, 00:35:34.824 "compare_and_write": false, 00:35:34.824 "abort": true, 00:35:34.824 "nvme_admin": false, 00:35:34.824 "nvme_io": false 00:35:34.824 }, 00:35:34.824 "memory_domains": [ 00:35:34.824 { 00:35:34.824 "dma_device_id": "system", 00:35:34.824 "dma_device_type": 1 00:35:34.824 }, 00:35:34.824 { 00:35:34.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:34.824 "dma_device_type": 2 00:35:34.824 } 00:35:34.824 ], 00:35:34.824 "driver_specific": {} 00:35:34.824 }' 00:35:34.824 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:34.824 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:35.083 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:35:35.083 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:35.083 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:35.083 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:35.083 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:35.083 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:35.083 11:28:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:35:35.083 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:35.341 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:35.341 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:35.341 11:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:35.598 [2024-05-15 11:28:53.984354] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # local expected_state 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:35.598 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.599 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:35.855 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:35.855 "name": "Existed_Raid", 00:35:35.855 "uuid": "026942cc-6094-439f-aede-d51810304233", 00:35:35.855 "strip_size_kb": 0, 00:35:35.855 "state": "online", 00:35:35.855 "raid_level": "raid1", 00:35:35.855 "superblock": true, 00:35:35.855 "num_base_bdevs": 2, 00:35:35.855 "num_base_bdevs_discovered": 1, 00:35:35.855 "num_base_bdevs_operational": 1, 00:35:35.855 
"base_bdevs_list": [ 00:35:35.855 { 00:35:35.855 "name": null, 00:35:35.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.855 "is_configured": false, 00:35:35.855 "data_offset": 256, 00:35:35.855 "data_size": 7936 00:35:35.855 }, 00:35:35.855 { 00:35:35.855 "name": "BaseBdev2", 00:35:35.855 "uuid": "26bf201a-85c1-42ae-aa1f-5755f6faa825", 00:35:35.855 "is_configured": true, 00:35:35.855 "data_offset": 256, 00:35:35.855 "data_size": 7936 00:35:35.855 } 00:35:35.855 ] 00:35:35.855 }' 00:35:35.855 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:35.855 11:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:36.421 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:36.421 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:36.421 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.679 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:35:36.679 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:35:36.679 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:36.679 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:36.938 [2024-05-15 11:28:55.469556] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:36.938 [2024-05-15 11:28:55.469656] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:36.938 [2024-05-15 11:28:55.551703] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:36.938 [2024-05-15 11:28:55.551799] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:36.938 [2024-05-15 11:28:55.551988] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:35:36.938 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:36.938 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:36.938 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.938 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@342 -- # killprocess 74932 00:35:37.197 11:28:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 74932 ']' 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 74932 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74932 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:37.197 killing process with pid 74932 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74932' 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 74932 00:35:37.197 11:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 74932 00:35:37.197 [2024-05-15 11:28:55.831147] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:37.197 [2024-05-15 11:28:55.831250] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:38.573 ************************************ 00:35:38.573 END TEST raid_state_function_test_sb_md_interleaved 00:35:38.573 ************************************ 00:35:38.573 11:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@344 -- # return 0 00:35:38.573 00:35:38.573 real 0m11.687s 00:35:38.573 user 0m20.787s 00:35:38.573 sys 0m1.176s 00:35:38.573 11:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:38.573 11:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:38.573 11:28:57 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:35:38.573 11:28:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:35:38.573 11:28:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:38.573 11:28:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:38.573 ************************************ 00:35:38.573 START TEST raid_superblock_test_md_interleaved 00:35:38.573 ************************************ 00:35:38.573 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:35:38.573 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 
-- # local base_bdevs_pt 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=75311 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 75311 /var/tmp/spdk-raid.sock 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 75311 ']' 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:38.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:38.574 11:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:38.832 [2024-05-15 11:28:57.266531] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:35:38.832 [2024-05-15 11:28:57.267134] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75311 ] 00:35:38.832 [2024-05-15 11:28:57.417922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.091 [2024-05-15 11:28:57.634260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.349 [2024-05-15 11:28:57.834241] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:39.608 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:35:39.867 malloc1 00:35:39.867 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:39.867 [2024-05-15 11:28:58.500051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:39.867 [2024-05-15 11:28:58.500187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:39.867 [2024-05-15 11:28:58.500246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:35:39.867 [2024-05-15 11:28:58.500286] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:39.867 [2024-05-15 11:28:58.501953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:39.867 [2024-05-15 11:28:58.501992] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:39.867 pt1 00:35:40.125 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:40.125 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:40.125 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:35:40.125 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:35:40.125 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:40.125 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:40.126 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:40.126 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:40.126 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:35:40.126 malloc2 00:35:40.126 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:40.384 [2024-05-15 11:28:58.910363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:40.384 [2024-05-15 11:28:58.910463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:40.384 [2024-05-15 11:28:58.910515] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:35:40.384 [2024-05-15 11:28:58.910554] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:40.384 [2024-05-15 11:28:58.912197] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:40.384 [2024-05-15 11:28:58.912241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:40.384 pt2 00:35:40.384 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:40.384 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:40.384 11:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:35:40.642 [2024-05-15 11:28:59.098460] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:40.642 [2024-05-15 11:28:59.100139] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:40.642 [2024-05-15 11:28:59.100322] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:35:40.642 [2024-05-15 11:28:59.100339] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:35:40.642 [2024-05-15 11:28:59.100417] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:35:40.642 [2024-05-15 11:28:59.100474] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:35:40.642 [2024-05-15 11:28:59.100488] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:35:40.642 [2024-05-15 11:28:59.100539] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:40.642 11:28:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:40.642 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:40.931 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:40.931 "name": "raid_bdev1", 00:35:40.931 "uuid": "2acee57b-4ab6-45d5-9ff3-66798d91b0a4", 00:35:40.931 "strip_size_kb": 0, 00:35:40.931 "state": "online", 00:35:40.931 "raid_level": "raid1", 00:35:40.931 "superblock": true, 00:35:40.931 "num_base_bdevs": 2, 00:35:40.931 "num_base_bdevs_discovered": 2, 00:35:40.931 "num_base_bdevs_operational": 2, 00:35:40.931 "base_bdevs_list": [ 00:35:40.931 { 00:35:40.931 "name": "pt1", 00:35:40.931 "uuid": "af5d2b68-228e-55f5-9553-8ee7950f4377", 00:35:40.931 "is_configured": true, 00:35:40.931 "data_offset": 256, 00:35:40.931 "data_size": 7936 00:35:40.931 }, 00:35:40.931 { 00:35:40.931 "name": "pt2", 00:35:40.931 "uuid": "e9991454-cb4d-5b5f-82e6-13841628006c", 00:35:40.931 "is_configured": true, 00:35:40.931 "data_offset": 256, 00:35:40.931 "data_size": 7936 00:35:40.931 } 00:35:40.931 ] 00:35:40.931 }' 00:35:40.931 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:40.931 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:41.498 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:35:41.498 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:35:41.498 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:35:41.498 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:35:41.498 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:35:41.498 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:35:41.498 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:41.498 11:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:35:41.498 [2024-05-15 
11:29:00.086658] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:41.498 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:35:41.498 "name": "raid_bdev1", 00:35:41.498 "aliases": [ 00:35:41.498 "2acee57b-4ab6-45d5-9ff3-66798d91b0a4" 00:35:41.498 ], 00:35:41.498 "product_name": "Raid Volume", 00:35:41.498 "block_size": 4128, 00:35:41.498 "num_blocks": 7936, 00:35:41.498 "uuid": "2acee57b-4ab6-45d5-9ff3-66798d91b0a4", 00:35:41.498 "md_size": 32, 00:35:41.498 "md_interleave": true, 00:35:41.498 "dif_type": 0, 00:35:41.498 "assigned_rate_limits": { 00:35:41.498 "rw_ios_per_sec": 0, 00:35:41.498 "rw_mbytes_per_sec": 0, 00:35:41.498 "r_mbytes_per_sec": 0, 00:35:41.498 "w_mbytes_per_sec": 0 00:35:41.498 }, 00:35:41.498 "claimed": false, 00:35:41.498 "zoned": false, 00:35:41.498 "supported_io_types": { 00:35:41.498 "read": true, 00:35:41.498 "write": true, 00:35:41.498 "unmap": false, 00:35:41.498 "write_zeroes": true, 00:35:41.498 "flush": false, 00:35:41.498 "reset": true, 00:35:41.498 "compare": false, 00:35:41.498 "compare_and_write": false, 00:35:41.498 "abort": false, 00:35:41.498 "nvme_admin": false, 00:35:41.498 "nvme_io": false 00:35:41.498 }, 00:35:41.498 "memory_domains": [ 00:35:41.498 { 00:35:41.499 "dma_device_id": "system", 00:35:41.499 "dma_device_type": 1 00:35:41.499 }, 00:35:41.499 { 00:35:41.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.499 "dma_device_type": 2 00:35:41.499 }, 00:35:41.499 { 00:35:41.499 "dma_device_id": "system", 00:35:41.499 "dma_device_type": 1 00:35:41.499 }, 00:35:41.499 { 00:35:41.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.499 "dma_device_type": 2 00:35:41.499 } 00:35:41.499 ], 00:35:41.499 "driver_specific": { 00:35:41.499 "raid": { 00:35:41.499 "uuid": "2acee57b-4ab6-45d5-9ff3-66798d91b0a4", 00:35:41.499 "strip_size_kb": 0, 00:35:41.499 "state": "online", 00:35:41.499 "raid_level": "raid1", 00:35:41.499 "superblock": true, 00:35:41.499 "num_base_bdevs": 2, 00:35:41.499 "num_base_bdevs_discovered": 2, 00:35:41.499 "num_base_bdevs_operational": 2, 00:35:41.499 "base_bdevs_list": [ 00:35:41.499 { 00:35:41.499 "name": "pt1", 00:35:41.499 "uuid": "af5d2b68-228e-55f5-9553-8ee7950f4377", 00:35:41.499 "is_configured": true, 00:35:41.499 "data_offset": 256, 00:35:41.499 "data_size": 7936 00:35:41.499 }, 00:35:41.499 { 00:35:41.499 "name": "pt2", 00:35:41.499 "uuid": "e9991454-cb4d-5b5f-82e6-13841628006c", 00:35:41.499 "is_configured": true, 00:35:41.499 "data_offset": 256, 00:35:41.499 "data_size": 7936 00:35:41.499 } 00:35:41.499 ] 00:35:41.499 } 00:35:41.499 } 00:35:41.499 }' 00:35:41.499 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:41.757 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:35:41.757 pt2' 00:35:41.757 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:41.757 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:41.757 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:41.757 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:41.757 "name": "pt1", 
00:35:41.757 "aliases": [ 00:35:41.757 "af5d2b68-228e-55f5-9553-8ee7950f4377" 00:35:41.757 ], 00:35:41.757 "product_name": "passthru", 00:35:41.757 "block_size": 4128, 00:35:41.757 "num_blocks": 8192, 00:35:41.757 "uuid": "af5d2b68-228e-55f5-9553-8ee7950f4377", 00:35:41.757 "md_size": 32, 00:35:41.757 "md_interleave": true, 00:35:41.757 "dif_type": 0, 00:35:41.757 "assigned_rate_limits": { 00:35:41.757 "rw_ios_per_sec": 0, 00:35:41.757 "rw_mbytes_per_sec": 0, 00:35:41.757 "r_mbytes_per_sec": 0, 00:35:41.757 "w_mbytes_per_sec": 0 00:35:41.757 }, 00:35:41.757 "claimed": true, 00:35:41.757 "claim_type": "exclusive_write", 00:35:41.757 "zoned": false, 00:35:41.757 "supported_io_types": { 00:35:41.757 "read": true, 00:35:41.757 "write": true, 00:35:41.757 "unmap": true, 00:35:41.757 "write_zeroes": true, 00:35:41.757 "flush": true, 00:35:41.757 "reset": true, 00:35:41.757 "compare": false, 00:35:41.757 "compare_and_write": false, 00:35:41.757 "abort": true, 00:35:41.757 "nvme_admin": false, 00:35:41.757 "nvme_io": false 00:35:41.757 }, 00:35:41.757 "memory_domains": [ 00:35:41.757 { 00:35:41.757 "dma_device_id": "system", 00:35:41.757 "dma_device_type": 1 00:35:41.757 }, 00:35:41.757 { 00:35:41.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.757 "dma_device_type": 2 00:35:41.757 } 00:35:41.757 ], 00:35:41.757 "driver_specific": { 00:35:41.757 "passthru": { 00:35:41.757 "name": "pt1", 00:35:41.757 "base_bdev_name": "malloc1" 00:35:41.757 } 00:35:41.757 } 00:35:41.757 }' 00:35:41.757 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:42.016 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:42.016 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:35:42.016 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:42.016 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:42.016 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:42.016 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:42.274 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:42.274 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:35:42.274 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:42.274 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:42.274 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:42.274 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:42.274 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:42.274 11:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:42.533 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:42.533 "name": "pt2", 00:35:42.533 "aliases": [ 00:35:42.533 "e9991454-cb4d-5b5f-82e6-13841628006c" 00:35:42.533 ], 00:35:42.533 "product_name": 
"passthru", 00:35:42.533 "block_size": 4128, 00:35:42.533 "num_blocks": 8192, 00:35:42.533 "uuid": "e9991454-cb4d-5b5f-82e6-13841628006c", 00:35:42.533 "md_size": 32, 00:35:42.533 "md_interleave": true, 00:35:42.533 "dif_type": 0, 00:35:42.533 "assigned_rate_limits": { 00:35:42.533 "rw_ios_per_sec": 0, 00:35:42.533 "rw_mbytes_per_sec": 0, 00:35:42.533 "r_mbytes_per_sec": 0, 00:35:42.533 "w_mbytes_per_sec": 0 00:35:42.533 }, 00:35:42.533 "claimed": true, 00:35:42.533 "claim_type": "exclusive_write", 00:35:42.533 "zoned": false, 00:35:42.533 "supported_io_types": { 00:35:42.533 "read": true, 00:35:42.533 "write": true, 00:35:42.533 "unmap": true, 00:35:42.533 "write_zeroes": true, 00:35:42.533 "flush": true, 00:35:42.533 "reset": true, 00:35:42.533 "compare": false, 00:35:42.533 "compare_and_write": false, 00:35:42.533 "abort": true, 00:35:42.533 "nvme_admin": false, 00:35:42.533 "nvme_io": false 00:35:42.533 }, 00:35:42.533 "memory_domains": [ 00:35:42.533 { 00:35:42.533 "dma_device_id": "system", 00:35:42.533 "dma_device_type": 1 00:35:42.533 }, 00:35:42.533 { 00:35:42.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:42.533 "dma_device_type": 2 00:35:42.533 } 00:35:42.533 ], 00:35:42.533 "driver_specific": { 00:35:42.533 "passthru": { 00:35:42.533 "name": "pt2", 00:35:42.533 "base_bdev_name": "malloc2" 00:35:42.533 } 00:35:42.533 } 00:35:42.533 }' 00:35:42.533 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:42.533 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:42.533 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:35:42.533 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:42.791 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:35:43.049 [2024-05-15 11:29:01.638890] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:43.049 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2acee57b-4ab6-45d5-9ff3-66798d91b0a4 00:35:43.049 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 2acee57b-4ab6-45d5-9ff3-66798d91b0a4 ']' 00:35:43.049 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:43.308 [2024-05-15 11:29:01.838740] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:43.308 [2024-05-15 11:29:01.838774] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:43.308 [2024-05-15 11:29:01.839030] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:43.308 [2024-05-15 11:29:01.839085] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:43.308 [2024-05-15 11:29:01.839098] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:35:43.308 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:35:43.308 11:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:43.566 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:35:43.566 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:35:43.566 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:43.566 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:43.823 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:43.823 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:44.081 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:44.081 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:44.340 11:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:44.599 [2024-05-15 11:29:03.038938] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:44.599 [2024-05-15 11:29:03.040551] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:44.599 [2024-05-15 11:29:03.040606] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:44.599 [2024-05-15 11:29:03.040684] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:44.599 [2024-05-15 11:29:03.040722] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:44.599 [2024-05-15 11:29:03.040735] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:35:44.599 request: 00:35:44.599 { 00:35:44.599 "name": "raid_bdev1", 00:35:44.599 "raid_level": "raid1", 00:35:44.599 "base_bdevs": [ 00:35:44.599 "malloc1", 00:35:44.599 "malloc2" 00:35:44.599 ], 00:35:44.599 "superblock": false, 00:35:44.599 "method": "bdev_raid_create", 00:35:44.599 "req_id": 1 00:35:44.599 } 00:35:44.599 Got JSON-RPC error response 00:35:44.599 response: 00:35:44.599 { 00:35:44.599 "code": -17, 00:35:44.599 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:44.599 } 00:35:44.599 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:35:44.599 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:44.599 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:44.599 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:44.599 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.599 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:44.858 [2024-05-15 
11:29:03.438959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:44.858 [2024-05-15 11:29:03.439081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:44.858 [2024-05-15 11:29:03.439128] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:35:44.858 [2024-05-15 11:29:03.439158] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:44.858 [2024-05-15 11:29:03.440787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:44.858 [2024-05-15 11:29:03.440847] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:44.858 [2024-05-15 11:29:03.440906] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:35:44.858 [2024-05-15 11:29:03.440977] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:44.858 pt1 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.858 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:45.116 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:45.116 "name": "raid_bdev1", 00:35:45.116 "uuid": "2acee57b-4ab6-45d5-9ff3-66798d91b0a4", 00:35:45.116 "strip_size_kb": 0, 00:35:45.116 "state": "configuring", 00:35:45.116 "raid_level": "raid1", 00:35:45.116 "superblock": true, 00:35:45.116 "num_base_bdevs": 2, 00:35:45.116 "num_base_bdevs_discovered": 1, 00:35:45.116 "num_base_bdevs_operational": 2, 00:35:45.116 "base_bdevs_list": [ 00:35:45.116 { 00:35:45.116 "name": "pt1", 00:35:45.116 "uuid": "af5d2b68-228e-55f5-9553-8ee7950f4377", 00:35:45.116 "is_configured": true, 00:35:45.116 "data_offset": 256, 00:35:45.116 "data_size": 7936 00:35:45.116 }, 00:35:45.116 { 00:35:45.116 "name": null, 00:35:45.116 "uuid": "e9991454-cb4d-5b5f-82e6-13841628006c", 00:35:45.116 "is_configured": false, 00:35:45.116 "data_offset": 256, 00:35:45.116 "data_size": 7936 00:35:45.116 } 00:35:45.116 ] 00:35:45.116 }' 
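The array re-assembles from the on-disk superblock rather than from another bdev_raid_create call: as each passthru bdev reappears, bdev_raid examines it, finds the superblock, and registers it under raid_bdev1, which sits in "configuring" until the set is complete; as the trace below shows, creating pt2 brings it online. A sketch of that sequence, reusing the exact RPCs and UUIDs from this run:

  # sketch only -- re-create the base bdevs and watch raid_bdev1 assemble from its superblock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # "configuring"
  $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # "online"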
00:35:45.116 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:45.116 11:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:45.682 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:35:45.682 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:35:45.682 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:45.682 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:45.941 [2024-05-15 11:29:04.531104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:45.941 [2024-05-15 11:29:04.531213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:45.941 [2024-05-15 11:29:04.531263] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:35:45.941 [2024-05-15 11:29:04.531293] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:45.941 [2024-05-15 11:29:04.531434] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:45.941 [2024-05-15 11:29:04.531471] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:45.941 [2024-05-15 11:29:04.531526] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:35:45.941 [2024-05-15 11:29:04.531557] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:45.941 [2024-05-15 11:29:04.531643] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:35:45.941 [2024-05-15 11:29:04.531657] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:35:45.941 [2024-05-15 11:29:04.531712] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:35:45.941 [2024-05-15 11:29:04.531758] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:35:45.941 [2024-05-15 11:29:04.531769] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:35:45.941 [2024-05-15 11:29:04.532004] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:45.941 pt2 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 
-- # local num_base_bdevs_operational=2 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:45.941 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:46.199 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:46.199 "name": "raid_bdev1", 00:35:46.199 "uuid": "2acee57b-4ab6-45d5-9ff3-66798d91b0a4", 00:35:46.199 "strip_size_kb": 0, 00:35:46.199 "state": "online", 00:35:46.199 "raid_level": "raid1", 00:35:46.199 "superblock": true, 00:35:46.199 "num_base_bdevs": 2, 00:35:46.199 "num_base_bdevs_discovered": 2, 00:35:46.199 "num_base_bdevs_operational": 2, 00:35:46.199 "base_bdevs_list": [ 00:35:46.199 { 00:35:46.199 "name": "pt1", 00:35:46.199 "uuid": "af5d2b68-228e-55f5-9553-8ee7950f4377", 00:35:46.199 "is_configured": true, 00:35:46.199 "data_offset": 256, 00:35:46.199 "data_size": 7936 00:35:46.199 }, 00:35:46.199 { 00:35:46.199 "name": "pt2", 00:35:46.199 "uuid": "e9991454-cb4d-5b5f-82e6-13841628006c", 00:35:46.199 "is_configured": true, 00:35:46.199 "data_offset": 256, 00:35:46.199 "data_size": 7936 00:35:46.199 } 00:35:46.199 ] 00:35:46.199 }' 00:35:46.199 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:46.199 11:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:35:47.135 [2024-05-15 11:29:05.695421] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:35:47.135 "name": "raid_bdev1", 00:35:47.135 "aliases": [ 00:35:47.135 "2acee57b-4ab6-45d5-9ff3-66798d91b0a4" 00:35:47.135 ], 00:35:47.135 "product_name": "Raid Volume", 00:35:47.135 "block_size": 4128, 00:35:47.135 
"num_blocks": 7936, 00:35:47.135 "uuid": "2acee57b-4ab6-45d5-9ff3-66798d91b0a4", 00:35:47.135 "md_size": 32, 00:35:47.135 "md_interleave": true, 00:35:47.135 "dif_type": 0, 00:35:47.135 "assigned_rate_limits": { 00:35:47.135 "rw_ios_per_sec": 0, 00:35:47.135 "rw_mbytes_per_sec": 0, 00:35:47.135 "r_mbytes_per_sec": 0, 00:35:47.135 "w_mbytes_per_sec": 0 00:35:47.135 }, 00:35:47.135 "claimed": false, 00:35:47.135 "zoned": false, 00:35:47.135 "supported_io_types": { 00:35:47.135 "read": true, 00:35:47.135 "write": true, 00:35:47.135 "unmap": false, 00:35:47.135 "write_zeroes": true, 00:35:47.135 "flush": false, 00:35:47.135 "reset": true, 00:35:47.135 "compare": false, 00:35:47.135 "compare_and_write": false, 00:35:47.135 "abort": false, 00:35:47.135 "nvme_admin": false, 00:35:47.135 "nvme_io": false 00:35:47.135 }, 00:35:47.135 "memory_domains": [ 00:35:47.135 { 00:35:47.135 "dma_device_id": "system", 00:35:47.135 "dma_device_type": 1 00:35:47.135 }, 00:35:47.135 { 00:35:47.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.135 "dma_device_type": 2 00:35:47.135 }, 00:35:47.135 { 00:35:47.135 "dma_device_id": "system", 00:35:47.135 "dma_device_type": 1 00:35:47.135 }, 00:35:47.135 { 00:35:47.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.135 "dma_device_type": 2 00:35:47.135 } 00:35:47.135 ], 00:35:47.135 "driver_specific": { 00:35:47.135 "raid": { 00:35:47.135 "uuid": "2acee57b-4ab6-45d5-9ff3-66798d91b0a4", 00:35:47.135 "strip_size_kb": 0, 00:35:47.135 "state": "online", 00:35:47.135 "raid_level": "raid1", 00:35:47.135 "superblock": true, 00:35:47.135 "num_base_bdevs": 2, 00:35:47.135 "num_base_bdevs_discovered": 2, 00:35:47.135 "num_base_bdevs_operational": 2, 00:35:47.135 "base_bdevs_list": [ 00:35:47.135 { 00:35:47.135 "name": "pt1", 00:35:47.135 "uuid": "af5d2b68-228e-55f5-9553-8ee7950f4377", 00:35:47.135 "is_configured": true, 00:35:47.135 "data_offset": 256, 00:35:47.135 "data_size": 7936 00:35:47.135 }, 00:35:47.135 { 00:35:47.135 "name": "pt2", 00:35:47.135 "uuid": "e9991454-cb4d-5b5f-82e6-13841628006c", 00:35:47.135 "is_configured": true, 00:35:47.135 "data_offset": 256, 00:35:47.135 "data_size": 7936 00:35:47.135 } 00:35:47.135 ] 00:35:47.135 } 00:35:47.135 } 00:35:47.135 }' 00:35:47.135 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:47.394 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:35:47.394 pt2' 00:35:47.394 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:47.395 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:47.395 11:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:47.395 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:47.395 "name": "pt1", 00:35:47.395 "aliases": [ 00:35:47.395 "af5d2b68-228e-55f5-9553-8ee7950f4377" 00:35:47.395 ], 00:35:47.395 "product_name": "passthru", 00:35:47.395 "block_size": 4128, 00:35:47.395 "num_blocks": 8192, 00:35:47.395 "uuid": "af5d2b68-228e-55f5-9553-8ee7950f4377", 00:35:47.395 "md_size": 32, 00:35:47.395 "md_interleave": true, 00:35:47.395 "dif_type": 0, 00:35:47.395 "assigned_rate_limits": { 00:35:47.395 "rw_ios_per_sec": 0, 
00:35:47.395 "rw_mbytes_per_sec": 0, 00:35:47.395 "r_mbytes_per_sec": 0, 00:35:47.395 "w_mbytes_per_sec": 0 00:35:47.395 }, 00:35:47.395 "claimed": true, 00:35:47.395 "claim_type": "exclusive_write", 00:35:47.395 "zoned": false, 00:35:47.395 "supported_io_types": { 00:35:47.395 "read": true, 00:35:47.395 "write": true, 00:35:47.395 "unmap": true, 00:35:47.395 "write_zeroes": true, 00:35:47.395 "flush": true, 00:35:47.395 "reset": true, 00:35:47.395 "compare": false, 00:35:47.395 "compare_and_write": false, 00:35:47.395 "abort": true, 00:35:47.395 "nvme_admin": false, 00:35:47.395 "nvme_io": false 00:35:47.395 }, 00:35:47.395 "memory_domains": [ 00:35:47.395 { 00:35:47.395 "dma_device_id": "system", 00:35:47.395 "dma_device_type": 1 00:35:47.395 }, 00:35:47.395 { 00:35:47.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.395 "dma_device_type": 2 00:35:47.395 } 00:35:47.395 ], 00:35:47.395 "driver_specific": { 00:35:47.395 "passthru": { 00:35:47.395 "name": "pt1", 00:35:47.395 "base_bdev_name": "malloc1" 00:35:47.395 } 00:35:47.395 } 00:35:47.395 }' 00:35:47.395 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:47.653 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:47.653 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:35:47.653 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:47.653 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:47.653 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:47.653 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:47.911 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:47.911 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:35:47.911 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:47.911 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:47.911 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:47.911 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:35:47.911 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:47.911 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:35:48.169 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:35:48.169 "name": "pt2", 00:35:48.169 "aliases": [ 00:35:48.169 "e9991454-cb4d-5b5f-82e6-13841628006c" 00:35:48.169 ], 00:35:48.169 "product_name": "passthru", 00:35:48.169 "block_size": 4128, 00:35:48.169 "num_blocks": 8192, 00:35:48.169 "uuid": "e9991454-cb4d-5b5f-82e6-13841628006c", 00:35:48.169 "md_size": 32, 00:35:48.169 "md_interleave": true, 00:35:48.169 "dif_type": 0, 00:35:48.169 "assigned_rate_limits": { 00:35:48.169 "rw_ios_per_sec": 0, 00:35:48.169 "rw_mbytes_per_sec": 0, 00:35:48.169 "r_mbytes_per_sec": 0, 00:35:48.169 "w_mbytes_per_sec": 0 00:35:48.169 }, 
00:35:48.169 "claimed": true, 00:35:48.169 "claim_type": "exclusive_write", 00:35:48.169 "zoned": false, 00:35:48.169 "supported_io_types": { 00:35:48.169 "read": true, 00:35:48.169 "write": true, 00:35:48.169 "unmap": true, 00:35:48.169 "write_zeroes": true, 00:35:48.169 "flush": true, 00:35:48.169 "reset": true, 00:35:48.169 "compare": false, 00:35:48.169 "compare_and_write": false, 00:35:48.169 "abort": true, 00:35:48.169 "nvme_admin": false, 00:35:48.169 "nvme_io": false 00:35:48.169 }, 00:35:48.169 "memory_domains": [ 00:35:48.169 { 00:35:48.169 "dma_device_id": "system", 00:35:48.169 "dma_device_type": 1 00:35:48.169 }, 00:35:48.169 { 00:35:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:48.169 "dma_device_type": 2 00:35:48.169 } 00:35:48.169 ], 00:35:48.169 "driver_specific": { 00:35:48.169 "passthru": { 00:35:48.169 "name": "pt2", 00:35:48.169 "base_bdev_name": "malloc2" 00:35:48.169 } 00:35:48.169 } 00:35:48.169 }' 00:35:48.169 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:48.169 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:35:48.169 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:35:48.169 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:48.427 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:35:48.427 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:35:48.427 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:48.427 11:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:35:48.427 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:35:48.427 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:48.686 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:35:48.686 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:35:48.686 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:48.686 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:35:48.944 [2024-05-15 11:29:07.383753] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:48.944 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 2acee57b-4ab6-45d5-9ff3-66798d91b0a4 '!=' 2acee57b-4ab6-45d5-9ff3-66798d91b0a4 ']' 00:35:48.944 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:35:48.944 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:35:48.944 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:35:48.944 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:49.202 [2024-05-15 11:29:07.591662] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
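Removing one base bdev from the assembled raid1 volume does not take it down; the verification that follows expects state "online" with a single discovered base bdev. A sketch of the same degraded-state check, assuming the socket and bdev names from this trace:

  # sketch only -- drop pt1 and confirm the raid1 volume stays online in degraded form
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | \
      jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)"'   # expect: online 1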
00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:49.202 "name": "raid_bdev1", 00:35:49.202 "uuid": "2acee57b-4ab6-45d5-9ff3-66798d91b0a4", 00:35:49.202 "strip_size_kb": 0, 00:35:49.202 "state": "online", 00:35:49.202 "raid_level": "raid1", 00:35:49.202 "superblock": true, 00:35:49.202 "num_base_bdevs": 2, 00:35:49.202 "num_base_bdevs_discovered": 1, 00:35:49.202 "num_base_bdevs_operational": 1, 00:35:49.202 "base_bdevs_list": [ 00:35:49.202 { 00:35:49.202 "name": null, 00:35:49.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.202 "is_configured": false, 00:35:49.202 "data_offset": 256, 00:35:49.202 "data_size": 7936 00:35:49.202 }, 00:35:49.202 { 00:35:49.202 "name": "pt2", 00:35:49.202 "uuid": "e9991454-cb4d-5b5f-82e6-13841628006c", 00:35:49.202 "is_configured": true, 00:35:49.202 "data_offset": 256, 00:35:49.202 "data_size": 7936 00:35:49.202 } 00:35:49.202 ] 00:35:49.202 }' 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:49.202 11:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:50.135 11:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:50.135 [2024-05-15 11:29:08.687806] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:50.135 [2024-05-15 11:29:08.687852] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:50.135 [2024-05-15 11:29:08.687923] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:50.135 [2024-05-15 11:29:08.687976] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:50.135 [2024-05-15 11:29:08.687987] 
bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:35:50.135 11:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:35:50.135 11:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:50.393 11:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:35:50.393 11:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:35:50.393 11:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:35:50.393 11:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:50.393 11:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:50.651 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:35:50.651 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:50.651 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:35:50.651 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:35:50.651 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:35:50.651 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:50.909 [2024-05-15 11:29:09.391893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:50.909 [2024-05-15 11:29:09.392013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:50.909 [2024-05-15 11:29:09.392058] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:35:50.909 [2024-05-15 11:29:09.392086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:50.909 [2024-05-15 11:29:09.393698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:50.909 [2024-05-15 11:29:09.393742] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:50.909 [2024-05-15 11:29:09.393796] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:35:50.909 [2024-05-15 11:29:09.393883] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:50.909 [2024-05-15 11:29:09.393951] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:35:50.909 [2024-05-15 11:29:09.393963] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:35:50.909 [2024-05-15 11:29:09.394015] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:35:50.909 [2024-05-15 11:29:09.394069] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:35:50.909 [2024-05-15 11:29:09.394083] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:35:50.909 [2024-05-15 
11:29:09.394125] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:50.909 pt2 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:50.910 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:51.168 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:51.168 "name": "raid_bdev1", 00:35:51.168 "uuid": "2acee57b-4ab6-45d5-9ff3-66798d91b0a4", 00:35:51.168 "strip_size_kb": 0, 00:35:51.168 "state": "online", 00:35:51.168 "raid_level": "raid1", 00:35:51.168 "superblock": true, 00:35:51.168 "num_base_bdevs": 2, 00:35:51.168 "num_base_bdevs_discovered": 1, 00:35:51.168 "num_base_bdevs_operational": 1, 00:35:51.168 "base_bdevs_list": [ 00:35:51.168 { 00:35:51.168 "name": null, 00:35:51.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.168 "is_configured": false, 00:35:51.168 "data_offset": 256, 00:35:51.168 "data_size": 7936 00:35:51.168 }, 00:35:51.168 { 00:35:51.168 "name": "pt2", 00:35:51.168 "uuid": "e9991454-cb4d-5b5f-82e6-13841628006c", 00:35:51.168 "is_configured": true, 00:35:51.168 "data_offset": 256, 00:35:51.168 "data_size": 7936 00:35:51.168 } 00:35:51.168 ] 00:35:51.168 }' 00:35:51.168 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:51.168 11:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:51.734 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:35:51.734 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:51.734 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:35:51.993 [2024-05-15 11:29:10.548174] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # '[' 
2acee57b-4ab6-45d5-9ff3-66798d91b0a4 '!=' 2acee57b-4ab6-45d5-9ff3-66798d91b0a4 ']' 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@568 -- # killprocess 75311 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 75311 ']' 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 75311 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75311 00:35:51.993 killing process with pid 75311 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75311' 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 75311 00:35:51.993 [2024-05-15 11:29:10.589895] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:51.993 11:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 75311 00:35:51.993 [2024-05-15 11:29:10.589956] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:51.993 [2024-05-15 11:29:10.589991] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:51.993 [2024-05-15 11:29:10.590001] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:35:52.252 [2024-05-15 11:29:10.759008] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:53.628 ************************************ 00:35:53.628 END TEST raid_superblock_test_md_interleaved 00:35:53.628 ************************************ 00:35:53.628 11:29:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # return 0 00:35:53.628 00:35:53.628 real 0m14.879s 00:35:53.628 user 0m27.213s 00:35:53.628 sys 0m1.462s 00:35:53.628 11:29:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:53.628 11:29:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:53.628 11:29:12 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:35:53.628 11:29:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:35:53.628 11:29:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:53.628 11:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:53.628 ************************************ 00:35:53.628 START TEST raid_rebuild_test_sb_md_interleaved 00:35:53.628 ************************************ 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 
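The state checks traced just above follow one pattern throughout these tests: bdev_raid_get_bdevs is queried over the test's RPC socket, the raid_bdev1 entry is selected with jq, and individual fields (state, raid_level, num_base_bdevs_discovered) are compared against expected values before the bdevperf process is killed. A minimal stand-alone sketch of that check, assuming the same socket path and bdev name used in this log (not the exact bdev_raid.sh helper):

    # Sketch of the verify_raid_bdev_state-style check seen in the trace above.
    # Assumes the RPC socket and raid bdev name used throughout this log.
    sock=/var/tmp/spdk-raid.sock
    info=$(scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1")')
    state=$(jq -r '.state' <<< "$info")
    level=$(jq -r '.raid_level' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
    if [ "$state" != online ] || [ "$level" != raid1 ] || [ "$discovered" -ne 1 ]; then
        echo "unexpected raid_bdev1 state: $info" >&2
    fi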
00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local superblock=true 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local background_io=false 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local verify=false 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # echo BaseBdev1 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # echo BaseBdev2 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:35:53.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # local strip_size 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@582 -- # local create_arg 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local data_offset 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # raid_pid=75788 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # waitforlisten 75788 /var/tmp/spdk-raid.sock 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 75788 ']' 00:35:53.628 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:53.629 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:53.629 11:29:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:53.629 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:53.629 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:53.629 11:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:53.629 [2024-05-15 11:29:12.199757] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:35:53.629 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:53.629 Zero copy mechanism will not be used. 00:35:53.629 [2024-05-15 11:29:12.199968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75788 ] 00:35:53.887 [2024-05-15 11:29:12.362617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.145 [2024-05-15 11:29:12.605970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.402 [2024-05-15 11:29:12.810180] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:54.402 11:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:54.402 11:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:35:54.402 11:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:35:54.402 11:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:35:54.660 BaseBdev1_malloc 00:35:54.660 11:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:54.917 [2024-05-15 11:29:13.502112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:54.917 [2024-05-15 11:29:13.502226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:54.917 [2024-05-15 11:29:13.502329] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:35:54.917 [2024-05-15 11:29:13.502376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:54.917 [2024-05-15 11:29:13.503998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:54.917 [2024-05-15 11:29:13.504055] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:54.917 BaseBdev1 00:35:54.917 11:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:35:54.917 11:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:35:55.175 BaseBdev2_malloc 00:35:55.175 11:29:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:55.433 [2024-05-15 11:29:13.933943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:55.433 [2024-05-15 11:29:13.934073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:55.433 [2024-05-15 11:29:13.934139] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:35:55.433 [2024-05-15 11:29:13.934181] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:55.433 [2024-05-15 11:29:13.935725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:55.433 [2024-05-15 11:29:13.935769] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:55.433 BaseBdev2 00:35:55.433 11:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:35:55.691 spare_malloc 00:35:55.691 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:55.950 spare_delay 00:35:55.950 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@614 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:56.209 [2024-05-15 11:29:14.637187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:56.209 [2024-05-15 11:29:14.637270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.209 [2024-05-15 11:29:14.637324] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:35:56.209 [2024-05-15 11:29:14.637381] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.209 [2024-05-15 11:29:14.639035] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.209 [2024-05-15 11:29:14.639088] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:56.209 spare 00:35:56.209 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:35:56.468 [2024-05-15 11:29:14.853396] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:56.468 [2024-05-15 11:29:14.855024] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:56.468 [2024-05-15 11:29:14.855254] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:35:56.468 [2024-05-15 11:29:14.855273] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:35:56.468 [2024-05-15 11:29:14.855379] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:35:56.468 [2024-05-15 11:29:14.855444] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:35:56.468 [2024-05-15 11:29:14.855456] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000011180 00:35:56.468 [2024-05-15 11:29:14.855519] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:56.469 11:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:56.469 11:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:56.469 "name": "raid_bdev1", 00:35:56.469 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:35:56.469 "strip_size_kb": 0, 00:35:56.469 "state": "online", 00:35:56.469 "raid_level": "raid1", 00:35:56.469 "superblock": true, 00:35:56.469 "num_base_bdevs": 2, 00:35:56.469 "num_base_bdevs_discovered": 2, 00:35:56.469 "num_base_bdevs_operational": 2, 00:35:56.469 "base_bdevs_list": [ 00:35:56.469 { 00:35:56.469 "name": "BaseBdev1", 00:35:56.469 "uuid": "2facdee5-2d70-5a4a-8df8-79df38c17196", 00:35:56.469 "is_configured": true, 00:35:56.469 "data_offset": 256, 00:35:56.469 "data_size": 7936 00:35:56.469 }, 00:35:56.469 { 00:35:56.469 "name": "BaseBdev2", 00:35:56.469 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:35:56.469 "is_configured": true, 00:35:56.469 "data_offset": 256, 00:35:56.469 "data_size": 7936 00:35:56.469 } 00:35:56.469 ] 00:35:56.469 }' 00:35:56.469 11:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:56.469 11:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:57.406 11:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:57.407 11:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:35:57.407 [2024-05-15 11:29:15.921654] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:57.407 11:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=7936 00:35:57.407 11:29:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.407 11:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:57.665 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # data_offset=256 00:35:57.665 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@626 -- # '[' false = true ']' 00:35:57.665 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@629 -- # '[' false = true ']' 00:35:57.665 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:57.924 [2024-05-15 11:29:16.369549] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:57.924 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.182 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:58.182 "name": "raid_bdev1", 00:35:58.182 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:35:58.182 "strip_size_kb": 0, 00:35:58.182 "state": "online", 00:35:58.182 "raid_level": "raid1", 00:35:58.182 "superblock": true, 00:35:58.182 "num_base_bdevs": 2, 00:35:58.182 "num_base_bdevs_discovered": 1, 00:35:58.182 "num_base_bdevs_operational": 1, 00:35:58.182 "base_bdevs_list": [ 00:35:58.182 { 00:35:58.182 "name": null, 00:35:58.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.182 "is_configured": false, 00:35:58.182 "data_offset": 256, 00:35:58.182 "data_size": 7936 00:35:58.182 }, 00:35:58.182 { 00:35:58.182 "name": "BaseBdev2", 00:35:58.182 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:35:58.182 "is_configured": true, 00:35:58.182 "data_offset": 256, 00:35:58.182 "data_size": 7936 00:35:58.182 } 00:35:58.182 ] 00:35:58.182 }' 
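The setup traced above assembled the bdev stack for the rebuild test (two malloc bdevs with interleaved metadata, each behind a passthru bdev, a delayed "spare" built the same way, and a RAID1 bdev with an on-disk superblock on top), then detached BaseBdev1 and confirmed the array stays online while num_base_bdevs_discovered drops to 1. A condensed sketch of those setup calls, copied from the RPC commands visible in the trace (same socket, sizes and names):

    # Condensed sketch of the setup traced above (sizes, flags and names as logged).
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2; do
        # 32 MiB malloc bdev, 4096-byte blocks, 32-byte interleaved metadata (-i)
        scripts/rpc.py -s "$sock" bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev${i}_malloc
        # passthru layer so the base bdev can be detached and re-attached during the test
        scripts/rpc.py -s "$sock" bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # delayed spare used later for the rebuild
    scripts/rpc.py -s "$sock" bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
    scripts/rpc.py -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    scripts/rpc.py -s "$sock" bdev_passthru_create -b spare_delay -p spare
    # RAID1 with on-disk superblock (-s) over the two base bdevs
    scripts/rpc.py -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1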
00:35:58.182 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:58.182 11:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:58.749 11:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:59.008 [2024-05-15 11:29:17.537822] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:59.008 [2024-05-15 11:29:17.553676] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:35:59.008 [2024-05-15 11:29:17.555187] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:59.008 11:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # sleep 1 00:35:59.944 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:59.944 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:59.944 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:59.944 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:59.944 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:59.944 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:59.944 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.202 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:00.202 "name": "raid_bdev1", 00:36:00.202 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:00.202 "strip_size_kb": 0, 00:36:00.202 "state": "online", 00:36:00.202 "raid_level": "raid1", 00:36:00.202 "superblock": true, 00:36:00.202 "num_base_bdevs": 2, 00:36:00.202 "num_base_bdevs_discovered": 2, 00:36:00.202 "num_base_bdevs_operational": 2, 00:36:00.202 "process": { 00:36:00.202 "type": "rebuild", 00:36:00.202 "target": "spare", 00:36:00.202 "progress": { 00:36:00.202 "blocks": 3072, 00:36:00.202 "percent": 38 00:36:00.202 } 00:36:00.202 }, 00:36:00.202 "base_bdevs_list": [ 00:36:00.202 { 00:36:00.202 "name": "spare", 00:36:00.202 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:00.202 "is_configured": true, 00:36:00.202 "data_offset": 256, 00:36:00.202 "data_size": 7936 00:36:00.202 }, 00:36:00.202 { 00:36:00.202 "name": "BaseBdev2", 00:36:00.202 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:00.202 "is_configured": true, 00:36:00.202 "data_offset": 256, 00:36:00.202 "data_size": 7936 00:36:00.202 } 00:36:00.202 ] 00:36:00.202 }' 00:36:00.202 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:00.460 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:00.460 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:00.460 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:00.460 11:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:00.719 [2024-05-15 11:29:19.125067] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:00.719 [2024-05-15 11:29:19.164975] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:00.719 [2024-05-15 11:29:19.165071] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.719 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.977 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:00.977 "name": "raid_bdev1", 00:36:00.977 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:00.977 "strip_size_kb": 0, 00:36:00.977 "state": "online", 00:36:00.977 "raid_level": "raid1", 00:36:00.977 "superblock": true, 00:36:00.977 "num_base_bdevs": 2, 00:36:00.977 "num_base_bdevs_discovered": 1, 00:36:00.977 "num_base_bdevs_operational": 1, 00:36:00.977 "base_bdevs_list": [ 00:36:00.977 { 00:36:00.977 "name": null, 00:36:00.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.977 "is_configured": false, 00:36:00.977 "data_offset": 256, 00:36:00.977 "data_size": 7936 00:36:00.977 }, 00:36:00.977 { 00:36:00.977 "name": "BaseBdev2", 00:36:00.977 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:00.977 "is_configured": true, 00:36:00.977 "data_offset": 256, 00:36:00.977 "data_size": 7936 00:36:00.977 } 00:36:00.977 ] 00:36:00.977 }' 00:36:00.977 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:00.977 11:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:01.543 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
none none 00:36:01.543 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:01.543 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:36:01.543 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:36:01.543 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:01.543 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:01.543 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.801 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:01.801 "name": "raid_bdev1", 00:36:01.801 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:01.801 "strip_size_kb": 0, 00:36:01.801 "state": "online", 00:36:01.801 "raid_level": "raid1", 00:36:01.801 "superblock": true, 00:36:01.801 "num_base_bdevs": 2, 00:36:01.801 "num_base_bdevs_discovered": 1, 00:36:01.801 "num_base_bdevs_operational": 1, 00:36:01.801 "base_bdevs_list": [ 00:36:01.801 { 00:36:01.801 "name": null, 00:36:01.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.801 "is_configured": false, 00:36:01.801 "data_offset": 256, 00:36:01.801 "data_size": 7936 00:36:01.801 }, 00:36:01.801 { 00:36:01.801 "name": "BaseBdev2", 00:36:01.801 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:01.801 "is_configured": true, 00:36:01.801 "data_offset": 256, 00:36:01.801 "data_size": 7936 00:36:01.801 } 00:36:01.801 ] 00:36:01.801 }' 00:36:01.801 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:01.801 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:01.801 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:02.059 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:36:02.059 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@667 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:02.059 [2024-05-15 11:29:20.636373] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:02.059 [2024-05-15 11:29:20.651113] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:36:02.059 [2024-05-15 11:29:20.652615] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:02.059 11:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # sleep 1 00:36:03.432 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:03.433 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:03.433 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:03.433 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:03.433 
11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:03.433 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.433 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.433 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:03.433 "name": "raid_bdev1", 00:36:03.433 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:03.433 "strip_size_kb": 0, 00:36:03.433 "state": "online", 00:36:03.433 "raid_level": "raid1", 00:36:03.433 "superblock": true, 00:36:03.433 "num_base_bdevs": 2, 00:36:03.433 "num_base_bdevs_discovered": 2, 00:36:03.433 "num_base_bdevs_operational": 2, 00:36:03.433 "process": { 00:36:03.433 "type": "rebuild", 00:36:03.433 "target": "spare", 00:36:03.433 "progress": { 00:36:03.433 "blocks": 3072, 00:36:03.433 "percent": 38 00:36:03.433 } 00:36:03.433 }, 00:36:03.433 "base_bdevs_list": [ 00:36:03.433 { 00:36:03.433 "name": "spare", 00:36:03.433 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:03.433 "is_configured": true, 00:36:03.433 "data_offset": 256, 00:36:03.433 "data_size": 7936 00:36:03.433 }, 00:36:03.433 { 00:36:03.433 "name": "BaseBdev2", 00:36:03.433 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:03.433 "is_configured": true, 00:36:03.433 "data_offset": 256, 00:36:03.433 "data_size": 7936 00:36:03.433 } 00:36:03.433 ] 00:36:03.433 }' 00:36:03.433 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:03.433 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:03.433 11:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:36:03.433 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']' 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # local timeout=755 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:03.433 11:29:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.433 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.692 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:03.692 "name": "raid_bdev1", 00:36:03.692 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:03.692 "strip_size_kb": 0, 00:36:03.692 "state": "online", 00:36:03.692 "raid_level": "raid1", 00:36:03.692 "superblock": true, 00:36:03.692 "num_base_bdevs": 2, 00:36:03.692 "num_base_bdevs_discovered": 2, 00:36:03.692 "num_base_bdevs_operational": 2, 00:36:03.692 "process": { 00:36:03.692 "type": "rebuild", 00:36:03.692 "target": "spare", 00:36:03.692 "progress": { 00:36:03.692 "blocks": 3840, 00:36:03.692 "percent": 48 00:36:03.692 } 00:36:03.692 }, 00:36:03.692 "base_bdevs_list": [ 00:36:03.692 { 00:36:03.692 "name": "spare", 00:36:03.692 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:03.692 "is_configured": true, 00:36:03.692 "data_offset": 256, 00:36:03.692 "data_size": 7936 00:36:03.692 }, 00:36:03.692 { 00:36:03.692 "name": "BaseBdev2", 00:36:03.692 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:03.692 "is_configured": true, 00:36:03.692 "data_offset": 256, 00:36:03.692 "data_size": 7936 00:36:03.692 } 00:36:03.692 ] 00:36:03.692 }' 00:36:03.692 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:03.692 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:03.692 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:03.950 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:03.950 11:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # sleep 1 00:36:04.932 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:36:04.932 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:04.932 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:04.932 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:04.932 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:04.932 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:04.932 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:04.932 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:05.190 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # 
raid_bdev_info='{ 00:36:05.190 "name": "raid_bdev1", 00:36:05.190 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:05.190 "strip_size_kb": 0, 00:36:05.190 "state": "online", 00:36:05.190 "raid_level": "raid1", 00:36:05.190 "superblock": true, 00:36:05.190 "num_base_bdevs": 2, 00:36:05.190 "num_base_bdevs_discovered": 2, 00:36:05.190 "num_base_bdevs_operational": 2, 00:36:05.190 "process": { 00:36:05.190 "type": "rebuild", 00:36:05.190 "target": "spare", 00:36:05.190 "progress": { 00:36:05.190 "blocks": 7168, 00:36:05.190 "percent": 90 00:36:05.190 } 00:36:05.190 }, 00:36:05.190 "base_bdevs_list": [ 00:36:05.190 { 00:36:05.190 "name": "spare", 00:36:05.190 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:05.190 "is_configured": true, 00:36:05.190 "data_offset": 256, 00:36:05.190 "data_size": 7936 00:36:05.190 }, 00:36:05.190 { 00:36:05.190 "name": "BaseBdev2", 00:36:05.190 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:05.190 "is_configured": true, 00:36:05.190 "data_offset": 256, 00:36:05.190 "data_size": 7936 00:36:05.190 } 00:36:05.190 ] 00:36:05.190 }' 00:36:05.190 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:05.190 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:05.190 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:05.190 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:05.190 11:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # sleep 1 00:36:05.190 [2024-05-15 11:29:23.776160] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:05.190 [2024-05-15 11:29:23.776218] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:05.191 [2024-05-15 11:29:23.776322] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:06.126 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:36:06.127 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:06.127 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:06.127 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:06.127 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:06.127 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:06.127 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.127 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:06.385 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:06.385 "name": "raid_bdev1", 00:36:06.385 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:06.385 "strip_size_kb": 0, 00:36:06.385 "state": "online", 00:36:06.385 "raid_level": "raid1", 00:36:06.385 "superblock": true, 00:36:06.385 
"num_base_bdevs": 2, 00:36:06.385 "num_base_bdevs_discovered": 2, 00:36:06.385 "num_base_bdevs_operational": 2, 00:36:06.385 "base_bdevs_list": [ 00:36:06.385 { 00:36:06.385 "name": "spare", 00:36:06.385 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:06.385 "is_configured": true, 00:36:06.385 "data_offset": 256, 00:36:06.385 "data_size": 7936 00:36:06.385 }, 00:36:06.385 { 00:36:06.385 "name": "BaseBdev2", 00:36:06.385 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:06.385 "is_configured": true, 00:36:06.385 "data_offset": 256, 00:36:06.385 "data_size": 7936 00:36:06.385 } 00:36:06.385 ] 00:36:06.385 }' 00:36:06.385 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:06.385 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:06.385 11:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # break 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:06.644 "name": "raid_bdev1", 00:36:06.644 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:06.644 "strip_size_kb": 0, 00:36:06.644 "state": "online", 00:36:06.644 "raid_level": "raid1", 00:36:06.644 "superblock": true, 00:36:06.644 "num_base_bdevs": 2, 00:36:06.644 "num_base_bdevs_discovered": 2, 00:36:06.644 "num_base_bdevs_operational": 2, 00:36:06.644 "base_bdevs_list": [ 00:36:06.644 { 00:36:06.644 "name": "spare", 00:36:06.644 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:06.644 "is_configured": true, 00:36:06.644 "data_offset": 256, 00:36:06.644 "data_size": 7936 00:36:06.644 }, 00:36:06.644 { 00:36:06.644 "name": "BaseBdev2", 00:36:06.644 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:06.644 "is_configured": true, 00:36:06.644 "data_offset": 256, 00:36:06.644 "data_size": 7936 00:36:06.644 } 00:36:06.644 ] 00:36:06.644 }' 00:36:06.644 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.945 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:07.204 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:07.204 "name": "raid_bdev1", 00:36:07.204 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:07.204 "strip_size_kb": 0, 00:36:07.204 "state": "online", 00:36:07.204 "raid_level": "raid1", 00:36:07.204 "superblock": true, 00:36:07.204 "num_base_bdevs": 2, 00:36:07.204 "num_base_bdevs_discovered": 2, 00:36:07.204 "num_base_bdevs_operational": 2, 00:36:07.204 "base_bdevs_list": [ 00:36:07.204 { 00:36:07.204 "name": "spare", 00:36:07.204 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:07.204 "is_configured": true, 00:36:07.204 "data_offset": 256, 00:36:07.204 "data_size": 7936 00:36:07.204 }, 00:36:07.204 { 00:36:07.204 "name": "BaseBdev2", 00:36:07.204 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:07.204 "is_configured": true, 00:36:07.204 "data_offset": 256, 00:36:07.204 "data_size": 7936 00:36:07.204 } 00:36:07.204 ] 00:36:07.204 }' 00:36:07.204 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:07.204 11:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:07.770 11:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:08.029 [2024-05-15 11:29:26.614302] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:08.029 [2024-05-15 11:29:26.614345] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:08.029 [2024-05-15 11:29:26.614433] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:08.029 [2024-05-15 11:29:26.614483] 
bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:08.029 [2024-05-15 11:29:26.614497] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:36:08.029 11:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:08.029 11:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # jq length 00:36:08.287 11:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:36:08.287 11:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@727 -- # '[' false = true ']' 00:36:08.287 11:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # '[' true = true ']' 00:36:08.287 11:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:36:08.287 11:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:36:08.287 11:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:08.545 11:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:08.803 [2024-05-15 11:29:27.288789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:08.803 [2024-05-15 11:29:27.288886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:08.803 [2024-05-15 11:29:27.288933] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:36:08.803 [2024-05-15 11:29:27.288964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:08.803 [2024-05-15 11:29:27.290417] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:08.803 [2024-05-15 11:29:27.290478] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:08.803 [2024-05-15 11:29:27.290534] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:08.803 [2024-05-15 11:29:27.290616] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:08.803 BaseBdev1 00:36:08.803 11:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:36:08.803 11:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev2 ']' 00:36:08.803 11:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:36:09.061 11:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:09.319 [2024-05-15 11:29:27.795803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:09.319 [2024-05-15 11:29:27.795921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:09.319 [2024-05-15 
11:29:27.795990] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:36:09.319 [2024-05-15 11:29:27.796021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:09.319 [2024-05-15 11:29:27.796199] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:09.319 [2024-05-15 11:29:27.796248] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:09.319 [2024-05-15 11:29:27.796321] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:36:09.319 [2024-05-15 11:29:27.796337] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:36:09.319 [2024-05-15 11:29:27.796345] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:09.319 [2024-05-15 11:29:27.796366] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:36:09.319 [2024-05-15 11:29:27.796444] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:09.319 BaseBdev2 00:36:09.319 11:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:09.577 [2024-05-15 11:29:28.185500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:09.577 [2024-05-15 11:29:28.185629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:09.577 [2024-05-15 11:29:28.185685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:36:09.577 [2024-05-15 11:29:28.185709] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:09.577 [2024-05-15 11:29:28.186087] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:09.577 [2024-05-15 11:29:28.186145] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:09.577 [2024-05-15 11:29:28.186209] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:36:09.577 [2024-05-15 11:29:28.186238] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:09.577 spare 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:09.577 11:29:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.577 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:09.836 [2024-05-15 11:29:28.286325] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:36:09.836 [2024-05-15 11:29:28.286375] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:36:09.836 [2024-05-15 11:29:28.286492] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:09.836 [2024-05-15 11:29:28.286609] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:36:09.836 [2024-05-15 11:29:28.286624] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:36:09.836 [2024-05-15 11:29:28.286674] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:09.836 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:09.836 "name": "raid_bdev1", 00:36:09.836 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:09.836 "strip_size_kb": 0, 00:36:09.836 "state": "online", 00:36:09.836 "raid_level": "raid1", 00:36:09.836 "superblock": true, 00:36:09.836 "num_base_bdevs": 2, 00:36:09.836 "num_base_bdevs_discovered": 2, 00:36:09.836 "num_base_bdevs_operational": 2, 00:36:09.836 "base_bdevs_list": [ 00:36:09.836 { 00:36:09.836 "name": "spare", 00:36:09.836 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:09.836 "is_configured": true, 00:36:09.836 "data_offset": 256, 00:36:09.836 "data_size": 7936 00:36:09.836 }, 00:36:09.836 { 00:36:09.836 "name": "BaseBdev2", 00:36:09.836 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:09.836 "is_configured": true, 00:36:09.836 "data_offset": 256, 00:36:09.836 "data_size": 7936 00:36:09.836 } 00:36:09.836 ] 00:36:09.836 }' 00:36:09.836 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:09.836 11:29:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:10.772 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:10.772 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:10.772 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:36:10.772 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:36:10.772 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:10.772 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:10.772 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:10.772 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:10.772 "name": "raid_bdev1", 00:36:10.772 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:10.772 "strip_size_kb": 0, 00:36:10.772 "state": "online", 00:36:10.772 "raid_level": "raid1", 00:36:10.772 "superblock": true, 00:36:10.772 "num_base_bdevs": 2, 00:36:10.772 "num_base_bdevs_discovered": 2, 00:36:10.772 "num_base_bdevs_operational": 2, 00:36:10.772 "base_bdevs_list": [ 00:36:10.772 { 00:36:10.772 "name": "spare", 00:36:10.772 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:10.772 "is_configured": true, 00:36:10.772 "data_offset": 256, 00:36:10.772 "data_size": 7936 00:36:10.772 }, 00:36:10.772 { 00:36:10.772 "name": "BaseBdev2", 00:36:10.772 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:10.772 "is_configured": true, 00:36:10.772 "data_offset": 256, 00:36:10.772 "data_size": 7936 00:36:10.772 } 00:36:10.772 ] 00:36:10.772 }' 00:36:10.772 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:11.032 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:11.032 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:11.032 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:36:11.032 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.032 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:11.291 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:36:11.291 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:11.291 [2024-05-15 11:29:29.916490] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- 
# local tmp 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.552 11:29:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:11.552 11:29:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:11.552 "name": "raid_bdev1", 00:36:11.552 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:11.552 "strip_size_kb": 0, 00:36:11.552 "state": "online", 00:36:11.552 "raid_level": "raid1", 00:36:11.552 "superblock": true, 00:36:11.552 "num_base_bdevs": 2, 00:36:11.552 "num_base_bdevs_discovered": 1, 00:36:11.552 "num_base_bdevs_operational": 1, 00:36:11.552 "base_bdevs_list": [ 00:36:11.552 { 00:36:11.552 "name": null, 00:36:11.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.552 "is_configured": false, 00:36:11.552 "data_offset": 256, 00:36:11.552 "data_size": 7936 00:36:11.552 }, 00:36:11.552 { 00:36:11.552 "name": "BaseBdev2", 00:36:11.552 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:11.552 "is_configured": true, 00:36:11.552 "data_offset": 256, 00:36:11.552 "data_size": 7936 00:36:11.552 } 00:36:11.552 ] 00:36:11.552 }' 00:36:11.552 11:29:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:11.552 11:29:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:12.488 11:29:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:12.488 [2024-05-15 11:29:31.024740] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:12.488 [2024-05-15 11:29:31.024905] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:12.488 [2024-05-15 11:29:31.024923] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:12.488 [2024-05-15 11:29:31.025207] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:12.488 [2024-05-15 11:29:31.039932] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:36:12.488 [2024-05-15 11:29:31.041281] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:12.488 11:29:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # sleep 1 00:36:13.423 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:13.423 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:13.423 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:13.423 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:13.423 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:13.423 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.423 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:13.682 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:13.682 "name": "raid_bdev1", 00:36:13.682 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:13.682 "strip_size_kb": 0, 00:36:13.682 "state": "online", 00:36:13.682 "raid_level": "raid1", 00:36:13.682 "superblock": true, 00:36:13.682 "num_base_bdevs": 2, 00:36:13.682 "num_base_bdevs_discovered": 2, 00:36:13.682 "num_base_bdevs_operational": 2, 00:36:13.682 "process": { 00:36:13.682 "type": "rebuild", 00:36:13.682 "target": "spare", 00:36:13.682 "progress": { 00:36:13.682 "blocks": 3072, 00:36:13.682 "percent": 38 00:36:13.682 } 00:36:13.682 }, 00:36:13.682 "base_bdevs_list": [ 00:36:13.682 { 00:36:13.682 "name": "spare", 00:36:13.682 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:13.682 "is_configured": true, 00:36:13.682 "data_offset": 256, 00:36:13.682 "data_size": 7936 00:36:13.682 }, 00:36:13.682 { 00:36:13.682 "name": "BaseBdev2", 00:36:13.682 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:13.682 "is_configured": true, 00:36:13.682 "data_offset": 256, 00:36:13.682 "data_size": 7936 00:36:13.682 } 00:36:13.682 ] 00:36:13.682 }' 00:36:13.682 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:13.941 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:13.941 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:13.941 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:13.941 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:14.199 [2024-05-15 11:29:32.647308] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:14.199 [2024-05-15 11:29:32.651169] 
bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:14.199 [2024-05-15 11:29:32.651255] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:14.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.200 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.457 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:14.458 "name": "raid_bdev1", 00:36:14.458 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:14.458 "strip_size_kb": 0, 00:36:14.458 "state": "online", 00:36:14.458 "raid_level": "raid1", 00:36:14.458 "superblock": true, 00:36:14.458 "num_base_bdevs": 2, 00:36:14.458 "num_base_bdevs_discovered": 1, 00:36:14.458 "num_base_bdevs_operational": 1, 00:36:14.458 "base_bdevs_list": [ 00:36:14.458 { 00:36:14.458 "name": null, 00:36:14.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.458 "is_configured": false, 00:36:14.458 "data_offset": 256, 00:36:14.458 "data_size": 7936 00:36:14.458 }, 00:36:14.458 { 00:36:14.458 "name": "BaseBdev2", 00:36:14.458 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:14.458 "is_configured": true, 00:36:14.458 "data_offset": 256, 00:36:14.458 "data_size": 7936 00:36:14.458 } 00:36:14.458 ] 00:36:14.458 }' 00:36:14.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:14.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:15.022 11:29:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:15.280 [2024-05-15 11:29:33.842389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:15.280 [2024-05-15 11:29:33.842485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:15.280 [2024-05-15 11:29:33.842554] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033380 00:36:15.280 [2024-05-15 11:29:33.842579] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:15.280 [2024-05-15 11:29:33.842761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:15.280 [2024-05-15 11:29:33.842796] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:15.280 [2024-05-15 11:29:33.843027] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:36:15.280 [2024-05-15 11:29:33.843050] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:15.280 [2024-05-15 11:29:33.843060] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:15.280 [2024-05-15 11:29:33.843093] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:15.280 [2024-05-15 11:29:33.857140] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:36:15.280 spare 00:36:15.280 [2024-05-15 11:29:33.858445] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:15.280 11:29:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:36:16.654 11:29:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:16.654 11:29:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:16.654 11:29:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:16.654 11:29:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:16.654 11:29:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:16.654 11:29:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.654 11:29:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:16.654 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:16.654 "name": "raid_bdev1", 00:36:16.654 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:16.654 "strip_size_kb": 0, 00:36:16.654 "state": "online", 00:36:16.654 "raid_level": "raid1", 00:36:16.654 "superblock": true, 00:36:16.654 "num_base_bdevs": 2, 00:36:16.654 "num_base_bdevs_discovered": 2, 00:36:16.654 "num_base_bdevs_operational": 2, 00:36:16.654 "process": { 00:36:16.654 "type": "rebuild", 00:36:16.654 "target": "spare", 00:36:16.654 "progress": { 00:36:16.654 "blocks": 3072, 00:36:16.654 "percent": 38 00:36:16.654 } 00:36:16.654 }, 00:36:16.654 "base_bdevs_list": [ 00:36:16.654 { 00:36:16.654 "name": "spare", 00:36:16.654 "uuid": "255abbe5-0ce6-5436-9aad-2e75ec227e32", 00:36:16.654 "is_configured": true, 00:36:16.654 "data_offset": 256, 00:36:16.654 "data_size": 7936 00:36:16.654 }, 00:36:16.654 { 00:36:16.654 "name": "BaseBdev2", 00:36:16.654 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:16.654 "is_configured": true, 00:36:16.654 "data_offset": 256, 00:36:16.654 "data_size": 7936 00:36:16.654 } 00:36:16.654 ] 
00:36:16.654 }' 00:36:16.654 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:16.654 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:16.654 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:16.655 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:16.655 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:16.911 [2024-05-15 11:29:35.453246] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:16.911 [2024-05-15 11:29:35.468045] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:16.911 [2024-05-15 11:29:35.468163] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.911 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:17.169 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:17.169 "name": "raid_bdev1", 00:36:17.169 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:17.169 "strip_size_kb": 0, 00:36:17.169 "state": "online", 00:36:17.169 "raid_level": "raid1", 00:36:17.169 "superblock": true, 00:36:17.169 "num_base_bdevs": 2, 00:36:17.169 "num_base_bdevs_discovered": 1, 00:36:17.169 "num_base_bdevs_operational": 1, 00:36:17.169 "base_bdevs_list": [ 00:36:17.169 { 00:36:17.169 "name": null, 00:36:17.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:17.169 "is_configured": false, 00:36:17.169 "data_offset": 256, 00:36:17.169 "data_size": 7936 00:36:17.169 }, 00:36:17.169 { 00:36:17.169 "name": "BaseBdev2", 00:36:17.169 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:17.169 "is_configured": 
true, 00:36:17.169 "data_offset": 256, 00:36:17.169 "data_size": 7936 00:36:17.169 } 00:36:17.169 ] 00:36:17.169 }' 00:36:17.169 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:17.169 11:29:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:18.101 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:18.101 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:18.101 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:36:18.102 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:36:18.102 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:18.102 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:18.102 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:18.360 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:18.360 "name": "raid_bdev1", 00:36:18.360 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:18.360 "strip_size_kb": 0, 00:36:18.360 "state": "online", 00:36:18.360 "raid_level": "raid1", 00:36:18.360 "superblock": true, 00:36:18.360 "num_base_bdevs": 2, 00:36:18.360 "num_base_bdevs_discovered": 1, 00:36:18.360 "num_base_bdevs_operational": 1, 00:36:18.360 "base_bdevs_list": [ 00:36:18.360 { 00:36:18.360 "name": null, 00:36:18.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.360 "is_configured": false, 00:36:18.360 "data_offset": 256, 00:36:18.360 "data_size": 7936 00:36:18.360 }, 00:36:18.360 { 00:36:18.360 "name": "BaseBdev2", 00:36:18.360 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:18.360 "is_configured": true, 00:36:18.360 "data_offset": 256, 00:36:18.360 "data_size": 7936 00:36:18.360 } 00:36:18.360 ] 00:36:18.360 }' 00:36:18.360 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:18.360 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:18.360 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:18.360 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:36:18.360 11:29:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:18.618 11:29:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@785 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:18.877 [2024-05-15 11:29:37.356291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:18.877 [2024-05-15 11:29:37.356420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:18.877 [2024-05-15 11:29:37.356495] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000035480 00:36:18.877 [2024-05-15 11:29:37.356522] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:18.877 [2024-05-15 11:29:37.356657] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:18.877 [2024-05-15 11:29:37.356723] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:18.877 [2024-05-15 11:29:37.356770] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:18.877 [2024-05-15 11:29:37.356785] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:18.877 [2024-05-15 11:29:37.356792] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:18.877 BaseBdev1 00:36:18.877 11:29:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # sleep 1 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:19.813 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:20.072 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:20.072 "name": "raid_bdev1", 00:36:20.072 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:20.072 "strip_size_kb": 0, 00:36:20.072 "state": "online", 00:36:20.072 "raid_level": "raid1", 00:36:20.072 "superblock": true, 00:36:20.072 "num_base_bdevs": 2, 00:36:20.072 "num_base_bdevs_discovered": 1, 00:36:20.072 "num_base_bdevs_operational": 1, 00:36:20.072 "base_bdevs_list": [ 00:36:20.072 { 00:36:20.072 "name": null, 00:36:20.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:20.072 "is_configured": false, 00:36:20.072 "data_offset": 256, 00:36:20.072 "data_size": 7936 00:36:20.072 }, 00:36:20.072 { 00:36:20.072 "name": "BaseBdev2", 00:36:20.072 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:20.072 "is_configured": true, 00:36:20.072 "data_offset": 256, 00:36:20.072 "data_size": 7936 00:36:20.072 } 00:36:20.072 ] 
00:36:20.072 }' 00:36:20.072 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:20.072 11:29:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:21.008 "name": "raid_bdev1", 00:36:21.008 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:21.008 "strip_size_kb": 0, 00:36:21.008 "state": "online", 00:36:21.008 "raid_level": "raid1", 00:36:21.008 "superblock": true, 00:36:21.008 "num_base_bdevs": 2, 00:36:21.008 "num_base_bdevs_discovered": 1, 00:36:21.008 "num_base_bdevs_operational": 1, 00:36:21.008 "base_bdevs_list": [ 00:36:21.008 { 00:36:21.008 "name": null, 00:36:21.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:21.008 "is_configured": false, 00:36:21.008 "data_offset": 256, 00:36:21.008 "data_size": 7936 00:36:21.008 }, 00:36:21.008 { 00:36:21.008 "name": "BaseBdev2", 00:36:21.008 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:21.008 "is_configured": true, 00:36:21.008 "data_offset": 256, 00:36:21.008 "data_size": 7936 00:36:21.008 } 00:36:21.008 ] 00:36:21.008 }' 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:21.008 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:21.267 [2024-05-15 11:29:39.820802] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:21.267 [2024-05-15 11:29:39.821203] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:21.267 [2024-05-15 11:29:39.821226] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:21.267 request: 00:36:21.267 { 00:36:21.267 "base_bdev": "BaseBdev1", 00:36:21.267 "raid_bdev": "raid_bdev1", 00:36:21.267 "method": "bdev_raid_add_base_bdev", 00:36:21.267 "req_id": 1 00:36:21.267 } 00:36:21.267 Got JSON-RPC error response 00:36:21.267 response: 00:36:21.267 { 00:36:21.267 "code": -22, 00:36:21.267 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:21.267 } 00:36:21.267 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:36:21.267 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:21.267 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:21.267 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:21.267 11:29:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # sleep 1 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.257 11:29:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:22.515 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:22.515 "name": "raid_bdev1", 00:36:22.515 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:22.515 "strip_size_kb": 0, 00:36:22.515 "state": "online", 00:36:22.515 "raid_level": "raid1", 00:36:22.515 "superblock": true, 00:36:22.515 "num_base_bdevs": 2, 00:36:22.515 "num_base_bdevs_discovered": 1, 00:36:22.515 "num_base_bdevs_operational": 1, 00:36:22.515 "base_bdevs_list": [ 00:36:22.515 { 00:36:22.515 "name": null, 00:36:22.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:22.515 "is_configured": false, 00:36:22.516 "data_offset": 256, 00:36:22.516 "data_size": 7936 00:36:22.516 }, 00:36:22.516 { 00:36:22.516 "name": "BaseBdev2", 00:36:22.516 "uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:22.516 "is_configured": true, 00:36:22.516 "data_offset": 256, 00:36:22.516 "data_size": 7936 00:36:22.516 } 00:36:22.516 ] 00:36:22.516 }' 00:36:22.516 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:22.516 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:23.451 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:23.451 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:23.451 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:36:23.451 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:36:23.451 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:23.451 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:23.451 11:29:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.451 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:23.451 "name": "raid_bdev1", 00:36:23.451 "uuid": "71ecf592-0739-4ad0-8709-c8fde13521b3", 00:36:23.451 "strip_size_kb": 0, 00:36:23.451 "state": "online", 00:36:23.451 "raid_level": "raid1", 00:36:23.451 "superblock": true, 00:36:23.451 "num_base_bdevs": 2, 00:36:23.451 "num_base_bdevs_discovered": 1, 00:36:23.451 "num_base_bdevs_operational": 1, 00:36:23.451 "base_bdevs_list": [ 00:36:23.451 { 00:36:23.451 "name": null, 00:36:23.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.451 "is_configured": false, 00:36:23.451 "data_offset": 256, 00:36:23.451 "data_size": 7936 00:36:23.451 }, 00:36:23.451 { 00:36:23.451 "name": "BaseBdev2", 00:36:23.451 
"uuid": "e6a104ca-c3c3-5459-a91e-5ba2eab749a9", 00:36:23.451 "is_configured": true, 00:36:23.451 "data_offset": 256, 00:36:23.451 "data_size": 7936 00:36:23.451 } 00:36:23.451 ] 00:36:23.451 }' 00:36:23.451 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # killprocess 75788 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 75788 ']' 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 75788 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75788 00:36:23.710 killing process with pid 75788 00:36:23.710 Received shutdown signal, test time was about 60.000000 seconds 00:36:23.710 00:36:23.710 Latency(us) 00:36:23.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:23.710 =================================================================================================================== 00:36:23.710 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75788' 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 75788 00:36:23.710 11:29:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 75788 00:36:23.710 [2024-05-15 11:29:42.216417] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:23.710 [2024-05-15 11:29:42.216556] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:23.710 [2024-05-15 11:29:42.216597] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:23.710 [2024-05-15 11:29:42.216609] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:36:23.970 [2024-05-15 11:29:42.474392] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:25.347 ************************************ 00:36:25.347 END TEST raid_rebuild_test_sb_md_interleaved 00:36:25.347 ************************************ 00:36:25.347 11:29:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@797 -- # return 0 00:36:25.347 00:36:25.347 real 0m31.674s 00:36:25.347 user 0m52.022s 00:36:25.347 sys 0m2.395s 00:36:25.347 11:29:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:25.347 11:29:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:25.347 11:29:43 bdev_raid -- bdev/bdev_raid.sh@862 -- # rm -f /raidrandtest 00:36:25.347 ************************************ 00:36:25.347 END TEST bdev_raid 00:36:25.347 ************************************ 00:36:25.347 00:36:25.347 real 12m26.227s 00:36:25.347 user 22m49.620s 00:36:25.347 sys 1m16.432s 00:36:25.347 11:29:43 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:25.347 11:29:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:25.347 11:29:43 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:36:25.347 11:29:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:25.347 11:29:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:25.347 11:29:43 -- common/autotest_common.sh@10 -- # set +x 00:36:25.347 ************************************ 00:36:25.347 START TEST bdevperf_config 00:36:25.347 ************************************ 00:36:25.347 11:29:43 bdevperf_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:36:25.347 * Looking for test storage... 00:36:25.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:36:25.347 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:25.347 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:25.347 11:29:43 bdevperf_config -- 
bdevperf/common.sh@20 -- # cat 00:36:25.347 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:25.347 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:36:25.347 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:25.347 11:29:43 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:36:30.612 11:29:48 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-05-15 11:29:44.061898] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:30.612 [2024-05-15 11:29:44.062104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76665 ] 00:36:30.612 Using job config with 4 jobs 00:36:30.612 [2024-05-15 11:29:44.215360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.612 [2024-05-15 11:29:44.438802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.612 cpumask for '\''job0'\'' is too big 00:36:30.612 cpumask for '\''job1'\'' is too big 00:36:30.612 cpumask for '\''job2'\'' is too big 00:36:30.612 cpumask for '\''job3'\'' is too big 00:36:30.612 Running I/O for 2 seconds... 
00:36:30.612 00:36:30.612 Latency(us) 00:36:30.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72515.06 70.82 0.00 0.00 3528.07 748.45 5719.51 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72500.35 70.80 0.00 0.00 3525.89 718.66 4915.20 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72550.03 70.85 0.00 0.00 3520.74 722.39 4885.41 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72534.93 70.83 0.00 0.00 3518.71 625.57 4885.41 00:36:30.612 =================================================================================================================== 00:36:30.612 Total : 290100.38 283.30 0.00 0.00 3523.35 625.57 5719.51' 00:36:30.612 11:29:48 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-05-15 11:29:44.061898] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:30.612 [2024-05-15 11:29:44.062104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76665 ] 00:36:30.612 Using job config with 4 jobs 00:36:30.612 [2024-05-15 11:29:44.215360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.612 [2024-05-15 11:29:44.438802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.612 cpumask for '\''job0'\'' is too big 00:36:30.612 cpumask for '\''job1'\'' is too big 00:36:30.612 cpumask for '\''job2'\'' is too big 00:36:30.612 cpumask for '\''job3'\'' is too big 00:36:30.612 Running I/O for 2 seconds... 00:36:30.612 00:36:30.612 Latency(us) 00:36:30.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72515.06 70.82 0.00 0.00 3528.07 748.45 5719.51 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72500.35 70.80 0.00 0.00 3525.89 718.66 4915.20 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72550.03 70.85 0.00 0.00 3520.74 722.39 4885.41 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72534.93 70.83 0.00 0.00 3518.71 625.57 4885.41 00:36:30.612 =================================================================================================================== 00:36:30.612 Total : 290100.38 283.30 0.00 0.00 3523.35 625.57 5719.51' 00:36:30.612 11:29:48 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-15 11:29:44.061898] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:36:30.612 [2024-05-15 11:29:44.062104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76665 ] 00:36:30.612 Using job config with 4 jobs 00:36:30.612 [2024-05-15 11:29:44.215360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.612 [2024-05-15 11:29:44.438802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.612 cpumask for '\''job0'\'' is too big 00:36:30.612 cpumask for '\''job1'\'' is too big 00:36:30.612 cpumask for '\''job2'\'' is too big 00:36:30.612 cpumask for '\''job3'\'' is too big 00:36:30.612 Running I/O for 2 seconds... 00:36:30.612 00:36:30.612 Latency(us) 00:36:30.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72515.06 70.82 0.00 0.00 3528.07 748.45 5719.51 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72500.35 70.80 0.00 0.00 3525.89 718.66 4915.20 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72550.03 70.85 0.00 0.00 3520.74 722.39 4885.41 00:36:30.612 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:30.612 Malloc0 : 2.01 72534.93 70.83 0.00 0.00 3518.71 625.57 4885.41 00:36:30.612 =================================================================================================================== 00:36:30.612 Total : 290100.38 283.30 0.00 0.00 3523.35 625.57 5719.51' 00:36:30.612 11:29:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:36:30.612 11:29:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:36:30.612 11:29:48 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:36:30.612 11:29:48 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:36:30.612 [2024-05-15 11:29:48.592271] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:30.612 [2024-05-15 11:29:48.592550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76729 ] 00:36:30.612 [2024-05-15 11:29:48.746567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.612 [2024-05-15 11:29:48.963649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.870 cpumask for 'job0' is too big 00:36:30.870 cpumask for 'job1' is too big 00:36:30.870 cpumask for 'job2' is too big 00:36:30.870 cpumask for 'job3' is too big 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:36:35.090 Running I/O for 2 seconds... 
00:36:35.090 00:36:35.090 Latency(us) 00:36:35.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.090 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:35.090 Malloc0 : 2.01 70274.10 68.63 0.00 0.00 3640.69 722.39 5868.45 00:36:35.090 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:35.090 Malloc0 : 2.01 70260.06 68.61 0.00 0.00 3638.30 662.81 5153.51 00:36:35.090 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:35.090 Malloc0 : 2.01 70246.52 68.60 0.00 0.00 3636.07 688.87 4498.15 00:36:35.090 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:36:35.090 Malloc0 : 2.01 70232.81 68.59 0.00 0.00 3633.66 677.70 4349.21 00:36:35.090 =================================================================================================================== 00:36:35.090 Total : 281013.50 274.43 0.00 0.00 3637.18 662.81 5868.45' 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:36:35.090 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:35.090 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:35.090 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:35.090 11:29:52 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:36:39.276 
11:29:57 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-05-15 11:29:53.123314] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:39.276 [2024-05-15 11:29:53.123497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76787 ] 00:36:39.276 Using job config with 3 jobs 00:36:39.276 [2024-05-15 11:29:53.273788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.276 [2024-05-15 11:29:53.527009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.276 cpumask for '\''job0'\'' is too big 00:36:39.276 cpumask for '\''job1'\'' is too big 00:36:39.276 cpumask for '\''job2'\'' is too big 00:36:39.276 Running I/O for 2 seconds... 00:36:39.276 00:36:39.276 Latency(us) 00:36:39.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.276 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:36:39.276 Malloc0 : 2.00 92251.79 90.09 0.00 0.00 2772.94 670.25 4051.32 00:36:39.276 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:36:39.276 Malloc0 : 2.01 92272.26 90.11 0.00 0.00 2770.07 685.15 4647.10 00:36:39.276 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:36:39.276 Malloc0 : 2.01 92254.23 90.09 0.00 0.00 2768.49 651.64 4438.57 00:36:39.276 =================================================================================================================== 00:36:39.276 Total : 276778.28 270.29 0.00 0.00 2770.50 651.64 4647.10' 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-05-15 11:29:53.123314] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:39.276 [2024-05-15 11:29:53.123497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76787 ] 00:36:39.276 Using job config with 3 jobs 00:36:39.276 [2024-05-15 11:29:53.273788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.276 [2024-05-15 11:29:53.527009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.276 cpumask for '\''job0'\'' is too big 00:36:39.276 cpumask for '\''job1'\'' is too big 00:36:39.276 cpumask for '\''job2'\'' is too big 00:36:39.276 Running I/O for 2 seconds... 
00:36:39.276 00:36:39.276 Latency(us) 00:36:39.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.276 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:36:39.276 Malloc0 : 2.00 92251.79 90.09 0.00 0.00 2772.94 670.25 4051.32 00:36:39.276 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:36:39.276 Malloc0 : 2.01 92272.26 90.11 0.00 0.00 2770.07 685.15 4647.10 00:36:39.276 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:36:39.276 Malloc0 : 2.01 92254.23 90.09 0.00 0.00 2768.49 651.64 4438.57 00:36:39.276 =================================================================================================================== 00:36:39.276 Total : 276778.28 270.29 0.00 0.00 2770.50 651.64 4647.10' 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-15 11:29:53.123314] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:39.276 [2024-05-15 11:29:53.123497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76787 ] 00:36:39.276 Using job config with 3 jobs 00:36:39.276 [2024-05-15 11:29:53.273788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.276 [2024-05-15 11:29:53.527009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.276 cpumask for '\''job0'\'' is too big 00:36:39.276 cpumask for '\''job1'\'' is too big 00:36:39.276 cpumask for '\''job2'\'' is too big 00:36:39.276 Running I/O for 2 seconds... 00:36:39.276 00:36:39.276 Latency(us) 00:36:39.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.276 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:36:39.276 Malloc0 : 2.00 92251.79 90.09 0.00 0.00 2772.94 670.25 4051.32 00:36:39.276 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:36:39.276 Malloc0 : 2.01 92272.26 90.11 0.00 0.00 2770.07 685.15 4647.10 00:36:39.276 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:36:39.276 Malloc0 : 2.01 92254.23 90.09 0.00 0.00 2768.49 651.64 4438.57 00:36:39.276 =================================================================================================================== 00:36:39.276 Total : 276778.28 270.29 0.00 0.00 2770.50 651.64 4647.10' 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:36:39.276 11:29:57 bdevperf_config -- 
bdevperf/common.sh@13 -- # cat 00:36:39.276 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:39.276 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:39.276 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:36:39.276 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:36:39.276 11:29:57 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:36:39.277 11:29:57 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:36:39.277 11:29:57 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:36:39.277 00:36:39.277 11:29:57 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:36:39.277 11:29:57 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:36:39.277 11:29:57 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:36:44.560 11:30:02 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-05-15 11:29:57.733817] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:36:44.560 [2024-05-15 11:29:57.734012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76850 ] 00:36:44.560 Using job config with 4 jobs 00:36:44.560 [2024-05-15 11:29:57.899960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.560 [2024-05-15 11:29:58.162213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.560 cpumask for '\''job0'\'' is too big 00:36:44.561 cpumask for '\''job1'\'' is too big 00:36:44.561 cpumask for '\''job2'\'' is too big 00:36:44.561 cpumask for '\''job3'\'' is too big 00:36:44.561 Running I/O for 2 seconds... 00:36:44.561 00:36:44.561 Latency(us) 00:36:44.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.01 34929.38 34.11 0.00 0.00 7325.45 1601.16 11617.75 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.01 34939.70 34.12 0.00 0.00 7320.31 1712.87 11617.75 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.02 34932.80 34.11 0.00 0.00 7313.15 1452.22 10128.29 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.02 34925.00 34.11 0.00 0.00 7311.60 1608.61 10247.45 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.02 34918.19 34.10 0.00 0.00 7304.49 1452.22 9472.93 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.02 34910.61 34.09 0.00 0.00 7302.65 1616.06 9472.93 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.02 34903.82 34.09 0.00 0.00 7295.09 1444.77 9532.51 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.02 34896.18 34.08 0.00 0.00 7294.38 1630.95 9592.09 00:36:44.561 =================================================================================================================== 00:36:44.561 Total : 279355.68 272.81 0.00 0.00 7308.38 1444.77 11617.75' 00:36:44.561 11:30:02 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-05-15 11:29:57.733817] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:44.561 [2024-05-15 11:29:57.734012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76850 ] 00:36:44.561 Using job config with 4 jobs 00:36:44.561 [2024-05-15 11:29:57.899960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.561 [2024-05-15 11:29:58.162213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.561 cpumask for '\''job0'\'' is too big 00:36:44.561 cpumask for '\''job1'\'' is too big 00:36:44.561 cpumask for '\''job2'\'' is too big 00:36:44.561 cpumask for '\''job3'\'' is too big 00:36:44.561 Running I/O for 2 seconds... 
00:36:44.561 00:36:44.561 Latency(us) 00:36:44.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.01 34929.38 34.11 0.00 0.00 7325.45 1601.16 11617.75 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.01 34939.70 34.12 0.00 0.00 7320.31 1712.87 11617.75 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.02 34932.80 34.11 0.00 0.00 7313.15 1452.22 10128.29 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.02 34925.00 34.11 0.00 0.00 7311.60 1608.61 10247.45 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.02 34918.19 34.10 0.00 0.00 7304.49 1452.22 9472.93 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.02 34910.61 34.09 0.00 0.00 7302.65 1616.06 9472.93 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.02 34903.82 34.09 0.00 0.00 7295.09 1444.77 9532.51 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.02 34896.18 34.08 0.00 0.00 7294.38 1630.95 9592.09 00:36:44.561 =================================================================================================================== 00:36:44.561 Total : 279355.68 272.81 0.00 0.00 7308.38 1444.77 11617.75' 00:36:44.561 11:30:02 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-15 11:29:57.733817] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:44.561 [2024-05-15 11:29:57.734012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76850 ] 00:36:44.561 Using job config with 4 jobs 00:36:44.561 [2024-05-15 11:29:57.899960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.561 [2024-05-15 11:29:58.162213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.561 cpumask for '\''job0'\'' is too big 00:36:44.561 cpumask for '\''job1'\'' is too big 00:36:44.561 cpumask for '\''job2'\'' is too big 00:36:44.561 cpumask for '\''job3'\'' is too big 00:36:44.561 Running I/O for 2 seconds... 
00:36:44.561 00:36:44.561 Latency(us) 00:36:44.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.01 34929.38 34.11 0.00 0.00 7325.45 1601.16 11617.75 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.01 34939.70 34.12 0.00 0.00 7320.31 1712.87 11617.75 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.02 34932.80 34.11 0.00 0.00 7313.15 1452.22 10128.29 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.02 34925.00 34.11 0.00 0.00 7311.60 1608.61 10247.45 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.02 34918.19 34.10 0.00 0.00 7304.49 1452.22 9472.93 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.02 34910.61 34.09 0.00 0.00 7302.65 1616.06 9472.93 00:36:44.561 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc0 : 2.02 34903.82 34.09 0.00 0.00 7295.09 1444.77 9532.51 00:36:44.561 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:36:44.561 Malloc1 : 2.02 34896.18 34.08 0.00 0.00 7294.38 1630.95 9592.09 00:36:44.561 =================================================================================================================== 00:36:44.561 Total : 279355.68 272.81 0.00 0.00 7308.38 1444.77 11617.75' 00:36:44.561 11:30:02 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:36:44.561 11:30:02 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:36:44.561 11:30:02 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:36:44.561 11:30:02 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:36:44.561 11:30:02 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:36:44.561 11:30:02 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:36:44.561 00:36:44.561 real 0m18.442s 00:36:44.561 user 0m16.388s 00:36:44.561 sys 0m1.224s 00:36:44.561 11:30:02 bdevperf_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:44.561 ************************************ 00:36:44.561 END TEST bdevperf_config 00:36:44.561 ************************************ 00:36:44.561 11:30:02 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:36:44.561 11:30:02 -- spdk/autotest.sh@188 -- # uname -s 00:36:44.561 11:30:02 -- spdk/autotest.sh@188 -- # [[ Linux == Linux ]] 00:36:44.561 11:30:02 -- spdk/autotest.sh@189 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:36:44.561 11:30:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:44.561 11:30:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:44.561 11:30:02 -- common/autotest_common.sh@10 -- # set +x 00:36:44.561 ************************************ 00:36:44.561 START TEST reactor_set_interrupt 00:36:44.561 ************************************ 00:36:44.561 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 
00:36:44.561 * Looking for test storage... 00:36:44.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:44.561 11:30:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:36:44.561 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:36:44.561 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:44.561 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:36:44.561 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:36:44.561 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:36:44.561 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:36:44.561 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:36:44.561 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:36:44.561 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:36:44.561 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:36:44.561 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:36:44.561 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:36:44.561 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_PGO_DIR= 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TESTS=y 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_APPS=y 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_ISAL_CRYPTO=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_LIBDIR= 
00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_DPDK_COMPRESSDEV=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_DAOS_DIR= 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_ISCSI_INITIATOR=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_DPDK_PKG_CONFIG=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_ASAN=y 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_LTO=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_CET=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_FUZZER=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_USDT=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_VTUNE=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_VHOST=y 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_WPDK_DIR= 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_UBLK=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_URING=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_SMA=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_IDXD_KERNEL=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FC_PATH= 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_PREFIX=/usr/local 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:36:44.561 11:30:02 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_XNVME=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_RDMA_PROV=verbs 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_RDMA_SET_TOS=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_FUZZER_LIB= 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_HAVE_LIBARCHIVE=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_ARCH=native 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_PGO_CAPTURE=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_DAOS=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_WERROR=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_DEBUG=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_AVAHI=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_CROSS_PREFIX= 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_PGO_USE=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_CRYPTO=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_HAVE_ARC4RANDOM=n 00:36:44.562 11:30:02 reactor_set_interrupt -- 
common/build_config.sh@54 -- # CONFIG_OPENSSL_PATH= 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_EXAMPLES=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_DPDK_INC_DIR= 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_EVP_MAC=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_MAX_LCORES= 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_VIRTIO=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_IPSEC_MB=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_UBSAN=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_HAVE_EXECINFO_H=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_HAVE_LIBBSD=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_URING_PATH= 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_NVME_CUSE=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_URING_ZNS=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_VFIO_USER=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RBD=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_RAID5F=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_VFIO_USER_DIR= 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_TSAN=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IDXD=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_DPDK_UADK=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_OCF=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_FIO_PLUGIN=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_COVERAGE=y 00:36:44.562 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:36:44.562 11:30:02 reactor_set_interrupt -- 
common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:36:44.562 #define SPDK_CONFIG_H 00:36:44.562 #define SPDK_CONFIG_APPS 1 00:36:44.562 #define SPDK_CONFIG_ARCH native 00:36:44.562 #define SPDK_CONFIG_ASAN 1 00:36:44.562 #undef SPDK_CONFIG_AVAHI 00:36:44.562 #undef SPDK_CONFIG_CET 00:36:44.562 #define SPDK_CONFIG_COVERAGE 1 00:36:44.562 #define SPDK_CONFIG_CROSS_PREFIX 00:36:44.562 #undef SPDK_CONFIG_CRYPTO 00:36:44.562 #undef SPDK_CONFIG_CRYPTO_MLX5 00:36:44.562 #undef SPDK_CONFIG_CUSTOMOCF 00:36:44.562 #define SPDK_CONFIG_DAOS 1 00:36:44.562 #define SPDK_CONFIG_DAOS_DIR 00:36:44.562 #define SPDK_CONFIG_DEBUG 1 00:36:44.562 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:36:44.562 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:36:44.562 #define SPDK_CONFIG_DPDK_INC_DIR 00:36:44.562 #define SPDK_CONFIG_DPDK_LIB_DIR 00:36:44.562 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:36:44.562 #undef SPDK_CONFIG_DPDK_UADK 00:36:44.562 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:36:44.562 #define SPDK_CONFIG_EXAMPLES 1 00:36:44.562 #undef SPDK_CONFIG_FC 00:36:44.562 #define SPDK_CONFIG_FC_PATH 00:36:44.562 #define SPDK_CONFIG_FIO_PLUGIN 1 00:36:44.562 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:36:44.562 #undef SPDK_CONFIG_FUSE 00:36:44.562 #undef SPDK_CONFIG_FUZZER 00:36:44.562 #define SPDK_CONFIG_FUZZER_LIB 00:36:44.562 #undef SPDK_CONFIG_GOLANG 00:36:44.562 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:36:44.562 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:36:44.562 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:36:44.562 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:36:44.562 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:36:44.562 #undef SPDK_CONFIG_HAVE_LIBBSD 00:36:44.562 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:36:44.562 #define SPDK_CONFIG_IDXD 1 00:36:44.562 #undef SPDK_CONFIG_IDXD_KERNEL 00:36:44.562 #undef SPDK_CONFIG_IPSEC_MB 00:36:44.562 #define SPDK_CONFIG_IPSEC_MB_DIR 00:36:44.562 #undef SPDK_CONFIG_ISAL 00:36:44.562 #undef SPDK_CONFIG_ISAL_CRYPTO 00:36:44.562 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:36:44.562 #define SPDK_CONFIG_LIBDIR 00:36:44.562 #undef SPDK_CONFIG_LTO 00:36:44.562 #define SPDK_CONFIG_MAX_LCORES 00:36:44.562 #define SPDK_CONFIG_NVME_CUSE 1 00:36:44.562 
#undef SPDK_CONFIG_OCF 00:36:44.562 #define SPDK_CONFIG_OCF_PATH 00:36:44.562 #define SPDK_CONFIG_OPENSSL_PATH 00:36:44.562 #undef SPDK_CONFIG_PGO_CAPTURE 00:36:44.562 #define SPDK_CONFIG_PGO_DIR 00:36:44.562 #undef SPDK_CONFIG_PGO_USE 00:36:44.562 #define SPDK_CONFIG_PREFIX /usr/local 00:36:44.562 #undef SPDK_CONFIG_RAID5F 00:36:44.562 #undef SPDK_CONFIG_RBD 00:36:44.562 #define SPDK_CONFIG_RDMA 1 00:36:44.562 #define SPDK_CONFIG_RDMA_PROV verbs 00:36:44.562 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:36:44.562 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:36:44.562 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:36:44.562 #undef SPDK_CONFIG_SHARED 00:36:44.562 #undef SPDK_CONFIG_SMA 00:36:44.562 #define SPDK_CONFIG_TESTS 1 00:36:44.562 #undef SPDK_CONFIG_TSAN 00:36:44.562 #undef SPDK_CONFIG_UBLK 00:36:44.562 #undef SPDK_CONFIG_UBSAN 00:36:44.562 #define SPDK_CONFIG_UNIT_TESTS 1 00:36:44.562 #undef SPDK_CONFIG_URING 00:36:44.562 #define SPDK_CONFIG_URING_PATH 00:36:44.562 #undef SPDK_CONFIG_URING_ZNS 00:36:44.562 #undef SPDK_CONFIG_USDT 00:36:44.562 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:36:44.562 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:36:44.562 #undef SPDK_CONFIG_VFIO_USER 00:36:44.562 #define SPDK_CONFIG_VFIO_USER_DIR 00:36:44.562 #define SPDK_CONFIG_VHOST 1 00:36:44.562 #define SPDK_CONFIG_VIRTIO 1 00:36:44.562 #undef SPDK_CONFIG_VTUNE 00:36:44.562 #define SPDK_CONFIG_VTUNE_DIR 00:36:44.562 #define SPDK_CONFIG_WERROR 1 00:36:44.562 #define SPDK_CONFIG_WPDK_DIR 00:36:44.562 #undef SPDK_CONFIG_XNVME 00:36:44.562 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:36:44.562 11:30:02 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:36:44.562 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:44.562 11:30:02 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:44.562 11:30:02 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.562 11:30:02 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.562 11:30:02 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:44.562 11:30:02 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:44.562 11:30:02 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:44.562 11:30:02 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:36:44.562 11:30:02 reactor_set_interrupt -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:44.562 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:36:44.562 11:30:02 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:36:44.563 11:30:02 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:36:44.563 11:30:02 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:36:44.563 11:30:02 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:36:44.563 11:30:02 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@57 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@61 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@63 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@65 -- # : 1 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@67 -- # : 1 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@69 -- # : 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@71 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@73 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@75 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@77 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@79 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@81 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@83 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@85 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@87 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@89 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@91 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@93 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:36:44.563 11:30:02 reactor_set_interrupt -- 
common/autotest_common.sh@95 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@97 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@99 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@101 -- # : rdma 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@103 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@105 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@107 -- # : 1 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@109 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@111 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@113 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@115 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@117 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@119 -- # : 1 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@121 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@123 -- # : 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@125 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@127 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@129 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@131 -- # : 0 00:36:44.563 11:30:02 
reactor_set_interrupt -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@133 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@135 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@137 -- # : 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@139 -- # : true 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@141 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@143 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@145 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@147 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@149 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@151 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@153 -- # : 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@155 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@157 -- # : 1 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@159 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@161 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@163 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@168 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 
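[Editor's note] The long run of ": 0" / ": 1" lines paired with "export SPDK_TEST_*" in this stretch of the trace is autotest_common.sh giving every test switch a default and exporting it; flags already set to 1 by this job's autorun-spdk.conf (UNITTEST, FUNCTIONAL_TEST, BLOCKDEV, DAOS, ASAN) keep that value, the rest fall back to 0. A minimal sketch of the default-and-export idiom, assuming the usual parameter-expansion form (the xtrace only shows the expanded result, so the exact source spelling is an assumption):

    # Default each flag only if the environment (autorun-spdk.conf) did not set it,
    # then export it so child test scripts observe the same switches.
    : "${SPDK_TEST_UNITTEST:=0}"; export SPDK_TEST_UNITTEST   # 1 in this run
    : "${SPDK_RUN_ASAN:=0}";      export SPDK_RUN_ASAN        # 1 in this run
    : "${SPDK_TEST_DAOS:=0}";     export SPDK_TEST_DAOS       # 1 in this run
    : "${SPDK_TEST_RAID5:=0}";    export SPDK_TEST_RAID5      # defaulted to 0 here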
00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@170 -- # : 0 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:36:44.563 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@192 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@199 -- # cat 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export QEMU_BIN= 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@252 -- # QEMU_BIN= 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@262 -- # export valgrind= 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@262 -- # valgrind= 00:36:44.564 11:30:02 
reactor_set_interrupt -- common/autotest_common.sh@268 -- # uname -s 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@278 -- # MAKE=make 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@298 -- # TEST_MODE= 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@317 -- # [[ -z 76962 ]] 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@317 -- # kill -0 76962 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local mount target_dir 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.xV1JqF 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.xV1JqF/tests/interrupt /tmp/spdk.xV1JqF 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@326 -- # df -T 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:36:44.564 11:30:02 
reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6267637760 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267637760 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6293479424 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6277242880 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=20946944 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6298189824 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda1 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=xfs 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=14333689856 00:36:44.564 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=21463302144 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=7129612288 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=1259638784 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # 
sizes["$mount"]=1259638784 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=93510586368 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=6192193536 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:36:44.565 * Looking for test storage... 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@367 -- # local target_space new_size 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:36:44.565 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@371 -- # mount=/ 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@373 -- # target_space=14333689856 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ xfs == tmpfs ]] 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ xfs == ramfs ]] 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@380 -- # new_size=9344204800 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:44.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@388 -- # return 0 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@1678 -- # set -o errtrace 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; 
print_backtrace >&2' ERR 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # true 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # xtrace_fd 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:36:44.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
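Editor's note: the set_test_storage portion of the trace above boils down to parsing df output and picking the first candidate directory with enough free space. The following is a minimal standalone sketch of that idea, not the autotest_common.sh implementation itself: the df invocation is simplified (the real script uses df -T plus awk), and the 2 GiB figure is the requested_size visible in the trace.

  #!/usr/bin/env bash
  # Sketch: choose a directory with at least 2 GiB free for test data.
  requested_size=2147483648                               # 2 GiB, as in the trace
  testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
  # Ask df how much space is free on the filesystem backing the test directory.
  read -r _ avail < <(df --output=target,avail -B1 "$testdir" | tail -n1)
  if (( avail >= requested_size )); then
      export SPDK_TEST_STORAGE="$testdir"
  else
      # Fall back to a scratch directory, mirroring the /tmp/spdk.XXXXXX
      # fallback (storage_fallback) created with mktemp in the trace.
      export SPDK_TEST_STORAGE="$(mktemp -dt spdk.XXXXXX)"
  fi
  printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"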
00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=77006 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 77006 /var/tmp/spdk.sock 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 77006 ']' 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.566 11:30:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:44.566 11:30:02 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:44.566 [2024-05-15 11:30:02.625910] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:44.566 [2024-05-15 11:30:02.626125] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77006 ] 00:36:44.566 [2024-05-15 11:30:02.792537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:44.566 [2024-05-15 11:30:03.061940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.566 [2024-05-15 11:30:03.062074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:44.566 [2024-05-15 11:30:03.062085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.824 [2024-05-15 11:30:03.392794] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
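Editor's note: at this point the harness has launched the interrupt_tgt example on a three-core mask and is waiting for its RPC socket to come up. A rough equivalent of that start-and-wait step, assuming the repo paths shown in the trace (the real waitforlisten helper in autotest_common.sh is more thorough):

  # Start the interrupt-mode example app on cores 0-2 (mask 0x07) with its RPC
  # server on /var/tmp/spdk.sock, then poll until the socket answers.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/interrupt_tgt" -m 0x07 -r /var/tmp/spdk.sock -E -g &
  intr_tgt_pid=$!
  for _ in $(seq 1 100); do
      # rpc_get_methods is a cheap request; success means the app is listening.
      if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done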
00:36:45.083 11:30:03 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:45.083 11:30:03 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:36:45.083 11:30:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:36:45.083 11:30:03 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:45.341 Malloc0 00:36:45.341 Malloc1 00:36:45.341 Malloc2 00:36:45.341 11:30:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:36:45.341 11:30:03 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:36:45.341 11:30:03 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:45.341 11:30:03 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:36:45.341 5000+0 records in 00:36:45.341 5000+0 records out 00:36:45.341 10240000 bytes (10 MB) copied, 0.0185861 s, 551 MB/s 00:36:45.341 11:30:03 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:36:45.600 AIO0 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 77006 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 77006 without_thd 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=77006 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:36:45.600 11:30:04 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:36:45.859 11:30:04 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:36:45.859 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:36:45.859 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:36:45.859 11:30:04 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:36:45.859 11:30:04 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:36:45.859 11:30:04 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:36:45.859 11:30:04 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:36:45.859 11:30:04 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:36:45.859 11:30:04 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:36:46.118 spdk_thread ids are 1 on reactor0. 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 77006 0 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77006 0 idle 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77006 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77006 -w 256 00:36:46.118 11:30:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77006 root 20 0 20.1t 122596 13304 S 0.0 1.0 0:00.84 reactor_0' 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77006 root 20 0 20.1t 122596 13304 S 0.0 1.0 0:00.84 reactor_0 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 77006 1 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77006 1 idle 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77006 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:46.377 11:30:04 reactor_set_interrupt -- 
interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77006 -w 256 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77011 root 20 0 20.1t 122596 13304 S 0.0 1.0 0:00.00 reactor_1' 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77011 root 20 0 20.1t 122596 13304 S 0.0 1.0 0:00.00 reactor_1 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 77006 2 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77006 2 idle 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77006 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:46.377 11:30:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:46.378 11:30:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77006 -w 256 00:36:46.378 11:30:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77012 root 20 0 20.1t 123140 13304 S 0.0 1.0 0:00.00 reactor_2' 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77012 root 20 0 20.1t 123140 13304 S 0.0 1.0 0:00.00 reactor_2 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:36:46.636 11:30:05 
reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:36:46.636 11:30:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:36:46.894 [2024-05-15 11:30:05.384094] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:46.894 11:30:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:36:47.153 [2024-05-15 11:30:05.635908] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:36:47.153 [2024-05-15 11:30:05.637340] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:36:47.153 11:30:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:36:47.411 [2024-05-15 11:30:05.827890] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:36:47.411 [2024-05-15 11:30:05.829362] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 77006 0 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 77006 0 busy 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77006 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:36:47.411 11:30:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77006 -w 256 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77006 root 20 0 20.1t 123276 13308 R 99.9 1.0 0:01.22 reactor_0' 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77006 root 20 0 20.1t 123276 13308 R 99.9 1.0 0:01.22 reactor_0 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:47.411 11:30:06 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # cpu_rate=99.9 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 77006 2 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 77006 2 busy 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77006 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77006 -w 256 00:36:47.411 11:30:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77012 root 20 0 20.1t 123276 13308 R 99.9 1.0 0:00.33 reactor_2' 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77012 root 20 0 20.1t 123276 13308 R 99.9 1.0 0:00.33 reactor_2 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:47.670 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:36:47.928 [2024-05-15 11:30:06.395949] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
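Editor's note: every reactor_is_busy / reactor_is_idle check in this trace reduces to sampling one top snapshot of the target pid's threads and reading the %CPU column of the reactor_N thread. A condensed sketch of that probe, using the same top invocation and thresholds seen above:

  # Read the %CPU of thread reactor_<idx> belonging to <pid> from one top sample.
  reactor_cpu() {
      local pid=$1 idx=$2
      top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}'
  }
  cpu=$(reactor_cpu 77006 0)    # pid from the first run above
  cpu=${cpu%.*}                 # drop the decimal part: 99.9 -> 99, 0.0 -> 0
  (( cpu >= 70 )) && echo "reactor_0 is busy (poll mode)"
  (( cpu <= 30 )) && echo "reactor_0 is idle (interrupt mode)"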
00:36:47.928 [2024-05-15 11:30:06.397072] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 77006 2 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77006 2 idle 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77006 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77006 -w 256 00:36:47.928 11:30:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77012 root 20 0 20.1t 123340 13308 S 0.0 1.0 0:00.56 reactor_2' 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77012 root 20 0 20.1t 123340 13308 S 0.0 1.0 0:00.56 reactor_2 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:36:48.186 11:30:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:48.187 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:36:48.187 [2024-05-15 11:30:06.759800] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:36:48.187 [2024-05-15 11:30:06.761370] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:36:48.187 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:36:48.187 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:36:48.187 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:36:48.445 [2024-05-15 11:30:06.956107] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 77006 0 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77006 0 idle 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77006 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77006 -w 256 00:36:48.445 11:30:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77006 root 20 0 20.1t 123424 13308 S 0.0 1.0 0:01.98 reactor_0' 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77006 root 20 0 20.1t 123424 13308 S 0.0 1.0 0:01.98 reactor_0 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:36:48.705 11:30:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 77006 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 77006 ']' 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 77006 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77006 00:36:48.705 killing process with pid 77006 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77006' 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@965 -- # kill 
77006 00:36:48.705 11:30:07 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 77006 00:36:50.081 11:30:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:36:50.081 11:30:08 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:36:50.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.081 11:30:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:36:50.081 11:30:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.081 11:30:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:36:50.081 11:30:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=77156 00:36:50.081 11:30:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:50.081 11:30:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 77156 /var/tmp/spdk.sock 00:36:50.082 11:30:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:36:50.082 11:30:08 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 77156 ']' 00:36:50.082 11:30:08 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.082 11:30:08 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:50.082 11:30:08 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.082 11:30:08 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:50.082 11:30:08 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:50.082 [2024-05-15 11:30:08.709545] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:50.082 [2024-05-15 11:30:08.709743] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77156 ] 00:36:50.340 [2024-05-15 11:30:08.880303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:50.599 [2024-05-15 11:30:09.122419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.599 [2024-05-15 11:30:09.122542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:50.599 [2024-05-15 11:30:09.122551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.857 [2024-05-15 11:30:09.421345] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
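Editor's note: between the two passes the harness tears down the first target and its AIO backing file before starting a fresh interrupt_tgt (pid 77156) for the threaded variant. A condensed form of that teardown, using the pid and path from the trace (the real killprocess helper also verifies the command name first, as visible above):

  # Stop the first interrupt target and remove its AIO backing file.
  kill "$intr_tgt_pid"                       # 77006 in the run above
  wait "$intr_tgt_pid" 2>/dev/null || true   # reap it; ignore the exit status
  rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile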
00:36:51.116 11:30:09 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:51.116 11:30:09 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:36:51.116 11:30:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:36:51.116 11:30:09 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:51.375 Malloc0 00:36:51.375 Malloc1 00:36:51.375 Malloc2 00:36:51.375 11:30:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:36:51.375 11:30:09 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:36:51.375 11:30:09 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:51.375 11:30:09 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:36:51.375 5000+0 records in 00:36:51.375 5000+0 records out 00:36:51.375 10240000 bytes (10 MB) copied, 0.0151132 s, 678 MB/s 00:36:51.375 11:30:09 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:36:51.634 AIO0 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 77156 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 77156 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=77156 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:36:51.634 11:30:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:36:51.893 11:30:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:36:51.893 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:36:51.893 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:36:51.893 11:30:10 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:36:51.893 11:30:10 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:36:51.893 11:30:10 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:36:51.893 11:30:10 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:36:51.893 11:30:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:36:51.893 11:30:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:36:52.152 spdk_thread ids are 1 on reactor0. 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 77156 0 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77156 0 idle 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77156 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77156 -w 256 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77156 root 20 0 20.1t 122604 13308 S 6.7 1.0 0:00.77 reactor_0' 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77156 root 20 0 20.1t 122604 13308 S 6.7 1.0 0:00.77 reactor_0 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 77156 1 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77156 1 idle 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77156 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != 
\b\u\s\y ]] 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77156 -w 256 00:36:52.152 11:30:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77164 root 20 0 20.1t 122604 13308 S 0.0 1.0 0:00.00 reactor_1' 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77164 root 20 0 20.1t 122604 13308 S 0.0 1.0 0:00.00 reactor_1 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 77156 2 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77156 2 idle 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77156 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77156 -w 256 00:36:52.411 11:30:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77165 root 20 0 20.1t 122604 13308 S 0.0 1.0 0:00.00 reactor_2' 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77165 root 20 0 20.1t 122604 13308 S 0.0 1.0 0:00.00 reactor_2 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:36:52.411 11:30:11 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:36:52.411 11:30:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:36:52.671 [2024-05-15 11:30:11.271934] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:36:52.671 [2024-05-15 11:30:11.272216] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:36:52.671 [2024-05-15 11:30:11.273343] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:36:52.671 11:30:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:36:52.930 [2024-05-15 11:30:11.527689] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:36:52.930 [2024-05-15 11:30:11.528492] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 77156 0 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 77156 0 busy 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77156 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77156 -w 256 00:36:52.930 11:30:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77156 root 20 0 20.1t 122648 13308 R 99.9 1.0 0:01.22 reactor_0' 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77156 root 20 0 20.1t 122648 13308 R 99.9 1.0 0:01.22 reactor_0 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:36:53.188 
11:30:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 77156 2 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 77156 2 busy 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77156 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:53.188 11:30:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:53.189 11:30:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:53.189 11:30:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77156 -w 256 00:36:53.189 11:30:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:36:53.446 11:30:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77165 root 20 0 20.1t 122648 13308 R 99.9 1.0 0:00.34 reactor_2' 00:36:53.446 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77165 root 20 0 20.1t 122648 13308 R 99.9 1.0 0:00.34 reactor_2 00:36:53.446 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:53.446 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:53.446 11:30:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:36:53.446 11:30:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:36:53.446 11:30:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:36:53.446 11:30:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:36:53.447 11:30:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:36:53.447 11:30:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:53.447 11:30:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:36:53.705 [2024-05-15 11:30:12.111880] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
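The repeated blocks above are the test's reactor state probe: it samples top once in batch/thread mode for the target PID, greps the reactor_<idx> thread, takes the CPU column, and treats roughly 70% and above as busy and 30% and below as idle. Below is a minimal standalone sketch of that probe, not the SPDK helper itself; the PID, index and thresholds are copied from the trace and the thread-naming assumption (reactor_<idx>) comes from the output shown.

#!/usr/bin/env bash
# Sketch of the busy/idle probe seen in interrupt/common.sh above.
# Assumptions: top supports -bHn/-p/-w as in the trace, the reactor thread
# is named reactor_<idx>, and the thresholds mirror the trace
# (busy >= 70% CPU, idle <= 30% CPU).
reactor_state() {
    local pid=$1 idx=$2
    local row cpu
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
    cpu=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu=${cpu%.*}    # drop the fraction, as the trace does (99.9 -> 99)
    if (( cpu >= 70 )); then
        echo busy
    elif (( cpu <= 30 )); then
        echo idle
    else
        echo unsettled
    fi
}

# Example with the values from this run: reactor_state 77156 2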
00:36:53.705 [2024-05-15 11:30:12.112076] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 77156 2 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77156 2 idle 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77156 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77156 -w 256 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77165 root 20 0 20.1t 122732 13316 S 0.0 1.0 0:00.58 reactor_2' 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77165 root 20 0 20.1t 122732 13316 S 0.0 1.0 0:00.58 reactor_2 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:53.705 11:30:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:36:53.964 [2024-05-15 11:30:12.522916] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:36:53.964 [2024-05-15 11:30:12.523324] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
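For reference, the mode switches above are ordinary JSON-RPC calls issued through rpc.py with the interrupt_plugin loaded. Condensed, the sequence the trace drives looks like the sketch below; the script path, plugin name and subcommand are taken verbatim from the trace, while the explicit -s flag is only added here to make the socket visible (the trace relies on the default /var/tmp/spdk.sock), and the commands assume the interrupt_tgt process is already running.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock

# -d disables interrupt mode, i.e. switches the reactor (and, for reactor 0,
# the app_thread) to poll mode ...
"$RPC" -s "$SOCK" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
"$RPC" -s "$SOCK" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d

# ... and calling it again without -d switches back to interrupt mode,
# which is what the two calls around this point in the trace do.
"$RPC" -s "$SOCK" --plugin interrupt_plugin reactor_set_interrupt_mode 2
"$RPC" -s "$SOCK" --plugin interrupt_plugin reactor_set_interrupt_mode 0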
00:36:53.964 [2024-05-15 11:30:12.523375] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 77156 0 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 77156 0 idle 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=77156 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:36:53.964 11:30:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 77156 -w 256 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 77156 root 20 0 20.1t 122796 13316 S 0.0 1.0 0:02.02 reactor_0' 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 77156 root 20 0 20.1t 122796 13316 S 0.0 1.0 0:02.02 reactor_0 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:36:54.223 11:30:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 77156 00:36:54.223 11:30:12 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 77156 ']' 00:36:54.223 11:30:12 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 77156 00:36:54.223 11:30:12 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:36:54.223 11:30:12 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:54.223 11:30:12 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77156 00:36:54.223 killing process with pid 77156 00:36:54.223 11:30:12 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:54.223 11:30:12 reactor_set_interrupt -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:54.223 11:30:12 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77156' 00:36:54.223 11:30:12 reactor_set_interrupt -- common/autotest_common.sh@965 -- # kill 77156 00:36:54.223 11:30:12 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 77156 00:36:55.652 11:30:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:36:55.652 11:30:14 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:36:55.652 ************************************ 00:36:55.652 END TEST reactor_set_interrupt 00:36:55.652 ************************************ 00:36:55.652 00:36:55.652 real 0m11.742s 00:36:55.652 user 0m12.401s 00:36:55.652 sys 0m1.481s 00:36:55.652 11:30:14 reactor_set_interrupt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:55.652 11:30:14 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:55.652 11:30:14 -- spdk/autotest.sh@190 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:36:55.652 11:30:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:55.652 11:30:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:55.652 11:30:14 -- common/autotest_common.sh@10 -- # set +x 00:36:55.652 ************************************ 00:36:55.652 START TEST reap_unregistered_poller 00:36:55.652 ************************************ 00:36:55.652 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:36:55.652 * Looking for test storage... 00:36:55.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:55.652 11:30:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:36:55.652 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:36:55.652 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:55.652 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:36:55.652 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
00:36:55.652 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:36:55.652 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:36:55.652 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:36:55.652 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:36:55.652 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:36:55.652 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:36:55.652 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:36:55.652 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:36:55.652 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_PGO_DIR= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TESTS=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_APPS=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_ISAL_CRYPTO=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_LIBDIR= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_DPDK_COMPRESSDEV=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_DAOS_DIR= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_ISCSI_INITIATOR=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_DPDK_PKG_CONFIG=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_ASAN=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_LTO=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_CET=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@25 -- # 
CONFIG_FUZZER=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_USDT=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_VTUNE=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_VHOST=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_WPDK_DIR= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_UBLK=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_URING=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_SMA=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_IDXD_KERNEL=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FC_PATH= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_PREFIX=/usr/local 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_XNVME=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_RDMA_PROV=verbs 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_RDMA_SET_TOS=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_FUZZER_LIB= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_HAVE_LIBARCHIVE=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_ARCH=native 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_PGO_CAPTURE=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_DAOS=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_WERROR=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_DEBUG=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_AVAHI=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_CROSS_PREFIX= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_PGO_USE=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_CRYPTO=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_HAVE_ARC4RANDOM=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_OPENSSL_PATH= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_EXAMPLES=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_DPDK_INC_DIR= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_EVP_MAC=n 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_MAX_LCORES= 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_VIRTIO=y 00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 
00:36:55.652 11:30:14 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_IPSEC_MB=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_UBSAN=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_HAVE_EXECINFO_H=y 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_HAVE_LIBBSD=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_URING_PATH= 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_NVME_CUSE=y 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_URING_ZNS=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_VFIO_USER=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RBD=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_RAID5F=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_VFIO_USER_DIR= 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_TSAN=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IDXD=y 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_DPDK_UADK=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_OCF=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_FIO_PLUGIN=y 00:36:55.653 11:30:14 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_COVERAGE=y 00:36:55.653 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:36:55.653 #define SPDK_CONFIG_H 00:36:55.653 #define SPDK_CONFIG_APPS 1 00:36:55.653 #define SPDK_CONFIG_ARCH native 00:36:55.653 #define SPDK_CONFIG_ASAN 1 00:36:55.653 #undef SPDK_CONFIG_AVAHI 00:36:55.653 #undef SPDK_CONFIG_CET 00:36:55.653 #define SPDK_CONFIG_COVERAGE 1 00:36:55.653 #define SPDK_CONFIG_CROSS_PREFIX 00:36:55.653 #undef SPDK_CONFIG_CRYPTO 00:36:55.653 #undef SPDK_CONFIG_CRYPTO_MLX5 00:36:55.653 #undef SPDK_CONFIG_CUSTOMOCF 00:36:55.653 #define SPDK_CONFIG_DAOS 1 00:36:55.653 #define SPDK_CONFIG_DAOS_DIR 00:36:55.653 #define SPDK_CONFIG_DEBUG 1 00:36:55.653 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:36:55.653 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:36:55.653 #define SPDK_CONFIG_DPDK_INC_DIR 00:36:55.653 #define SPDK_CONFIG_DPDK_LIB_DIR 00:36:55.653 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:36:55.653 #undef SPDK_CONFIG_DPDK_UADK 00:36:55.653 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:36:55.653 #define SPDK_CONFIG_EXAMPLES 1 00:36:55.653 #undef SPDK_CONFIG_FC 00:36:55.653 #define SPDK_CONFIG_FC_PATH 00:36:55.653 #define SPDK_CONFIG_FIO_PLUGIN 1 00:36:55.653 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:36:55.653 #undef SPDK_CONFIG_FUSE 00:36:55.653 #undef SPDK_CONFIG_FUZZER 00:36:55.653 #define SPDK_CONFIG_FUZZER_LIB 00:36:55.653 #undef SPDK_CONFIG_GOLANG 00:36:55.653 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:36:55.653 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:36:55.653 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:36:55.653 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:36:55.653 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:36:55.653 #undef SPDK_CONFIG_HAVE_LIBBSD 00:36:55.653 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:36:55.653 #define SPDK_CONFIG_IDXD 1 00:36:55.653 #undef SPDK_CONFIG_IDXD_KERNEL 00:36:55.653 #undef SPDK_CONFIG_IPSEC_MB 00:36:55.653 #define SPDK_CONFIG_IPSEC_MB_DIR 00:36:55.653 #undef SPDK_CONFIG_ISAL 00:36:55.653 #undef SPDK_CONFIG_ISAL_CRYPTO 00:36:55.653 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:36:55.653 #define SPDK_CONFIG_LIBDIR 00:36:55.653 #undef SPDK_CONFIG_LTO 00:36:55.653 #define SPDK_CONFIG_MAX_LCORES 00:36:55.653 #define SPDK_CONFIG_NVME_CUSE 1 00:36:55.653 #undef SPDK_CONFIG_OCF 00:36:55.653 #define SPDK_CONFIG_OCF_PATH 00:36:55.653 #define SPDK_CONFIG_OPENSSL_PATH 00:36:55.653 #undef SPDK_CONFIG_PGO_CAPTURE 00:36:55.653 #define SPDK_CONFIG_PGO_DIR 00:36:55.653 #undef SPDK_CONFIG_PGO_USE 00:36:55.653 #define SPDK_CONFIG_PREFIX /usr/local 00:36:55.653 #undef SPDK_CONFIG_RAID5F 00:36:55.653 #undef SPDK_CONFIG_RBD 00:36:55.653 #define SPDK_CONFIG_RDMA 1 00:36:55.653 #define SPDK_CONFIG_RDMA_PROV verbs 00:36:55.653 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 
00:36:55.653 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:36:55.653 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:36:55.653 #undef SPDK_CONFIG_SHARED 00:36:55.653 #undef SPDK_CONFIG_SMA 00:36:55.653 #define SPDK_CONFIG_TESTS 1 00:36:55.653 #undef SPDK_CONFIG_TSAN 00:36:55.653 #undef SPDK_CONFIG_UBLK 00:36:55.653 #undef SPDK_CONFIG_UBSAN 00:36:55.653 #define SPDK_CONFIG_UNIT_TESTS 1 00:36:55.653 #undef SPDK_CONFIG_URING 00:36:55.653 #define SPDK_CONFIG_URING_PATH 00:36:55.653 #undef SPDK_CONFIG_URING_ZNS 00:36:55.653 #undef SPDK_CONFIG_USDT 00:36:55.653 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:36:55.653 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:36:55.653 #undef SPDK_CONFIG_VFIO_USER 00:36:55.653 #define SPDK_CONFIG_VFIO_USER_DIR 00:36:55.653 #define SPDK_CONFIG_VHOST 1 00:36:55.653 #define SPDK_CONFIG_VIRTIO 1 00:36:55.653 #undef SPDK_CONFIG_VTUNE 00:36:55.653 #define SPDK_CONFIG_VTUNE_DIR 00:36:55.653 #define SPDK_CONFIG_WERROR 1 00:36:55.653 #define SPDK_CONFIG_WPDK_DIR 00:36:55.653 #undef SPDK_CONFIG_XNVME 00:36:55.653 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:36:55.653 11:30:14 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:36:55.653 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:55.653 11:30:14 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:55.653 11:30:14 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:55.653 11:30:14 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:55.653 11:30:14 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:55.653 11:30:14 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:55.653 11:30:14 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:55.653 11:30:14 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:36:55.653 11:30:14 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:55.653 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:36:55.653 11:30:14 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:36:55.653 11:30:14 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:36:55.653 11:30:14 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:36:55.653 11:30:14 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:36:55.653 11:30:14 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:36:55.653 11:30:14 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:36:55.654 11:30:14 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@57 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@61 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@63 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@65 -- # : 1 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@67 -- # : 1 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@69 -- # : 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@71 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@73 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@75 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@77 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@79 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@81 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@83 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@85 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@87 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@89 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@91 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@93 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- 
common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@95 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@97 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@99 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@101 -- # : rdma 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@103 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@105 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@107 -- # : 1 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@109 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@111 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@113 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@115 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@117 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@119 -- # : 1 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@121 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@123 -- # : 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@125 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@127 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@129 -- # : 0 00:36:55.654 
11:30:14 reap_unregistered_poller -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@131 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@133 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@135 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@137 -- # : 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@139 -- # : true 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@141 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@143 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@145 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@147 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@149 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@151 -- # : 0 00:36:55.654 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@153 -- # : 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@155 -- # : 0 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@157 -- # : 1 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@159 -- # : 0 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@161 -- # : 0 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@163 -- # : 0 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 
00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@168 -- # : 0 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@170 -- # : 0 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@188 -- # 
PYTHONDONTWRITEBYTECODE=1 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@199 -- # cat 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export QEMU_BIN= 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@252 -- # QEMU_BIN= 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:36:55.655 11:30:14 reap_unregistered_poller -- 
common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@262 -- # export valgrind= 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@262 -- # valgrind= 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@268 -- # uname -s 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@278 -- # MAKE=make 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@298 -- # TEST_MODE= 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@317 -- # [[ -z 77350 ]] 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@317 -- # kill -0 77350 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local mount target_dir 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:36:55.655 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.9CWgqI 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.9CWgqI/tests/interrupt /tmp/spdk.9CWgqI 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@357 -- # 
requested_size=2214592512 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@326 -- # df -T 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6267637760 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267637760 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6293479424 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6277242880 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=20946944 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6298189824 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda1 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=xfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=14333706240 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=21463302144 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- 
# uses["$mount"]=7129595904 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=1259638784 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=1259638784 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=93508509696 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=6194270208 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:36:55.656 * Looking for test storage... 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@367 -- # local target_space new_size 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@371 -- # mount=/ 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@373 -- # target_space=14333706240 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ xfs == tmpfs ]] 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ xfs == ramfs ]] 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@380 -- # new_size=9344188416 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:36:55.656 
11:30:14 reap_unregistered_poller -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:55.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@388 -- # return 0 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@1678 -- # set -o errtrace 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # true 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # xtrace_fd 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:36:55.656 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:36:55.656 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:36:55.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
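The commands traced above (autotest_common.sh@1678 onward) arm bash's error tracing before the test body runs. Below is a minimal stand-alone sketch of the same pattern; print_backtrace here is a simplified stand-in for the helper that autotest_common.sh installs in its ERR trap, not the real SPDK function:

    # Sketch only -- print_backtrace is a stand-in, not the SPDK helper.
    print_backtrace() {
        local i
        for ((i = 1; i < ${#FUNCNAME[@]}; i++)); do
            echo "  at ${FUNCNAME[$i]} (${BASH_SOURCE[$i]}:${BASH_LINENO[$((i - 1))]})" >&2
        done
    }
    set -o errtrace                 # let the ERR trap fire inside functions and subshells
    shopt -s extdebug
    trap 'trap - ERR; print_backtrace >&2' ERR
    PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x                          # echo every command, prefixed by the expanded PS4
    false                           # a failing command now prints a backtrace

Because PS4 is prompt-expanded, \t becomes a wall-clock timestamp and $test_domain the current test name, which is exactly the "11:30:14 reap_unregistered_poller -- file@line -- #" prefix carried by every traced command in this log.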
00:36:55.656 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:55.656 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:36:55.656 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:36:55.656 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:36:55.656 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:36:55.656 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:36:55.657 11:30:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:36:55.657 11:30:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:36:55.657 11:30:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:36:55.657 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.657 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:36:55.657 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=77394 00:36:55.657 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:55.657 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 77394 /var/tmp/spdk.sock 00:36:55.657 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@827 -- # '[' -z 77394 ']' 00:36:55.657 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.657 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:55.657 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:55.657 11:30:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:36:55.657 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:55.657 11:30:14 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:36:55.916 [2024-05-15 11:30:14.403271] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
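interrupt_common.sh@20-26 above launches the interrupt target with cpu_mask 0x07 and waits for it to listen on /var/tmp/spdk.sock. As a small illustration (not part of the test scripts), a hex coremask is a bitmap of CPU cores, so 0x07 = 0b111 selects cores 0, 1 and 2 -- which is why three reactors come up on those cores just below:

    # Decode a coremask: bit N set means core N is used.
    mask=0x07
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core selected by mask"
    done
    # -> cores 0, 1 and 2 for mask 0x07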
00:36:55.916 [2024-05-15 11:30:14.403470] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77394 ] 00:36:56.174 [2024-05-15 11:30:14.572396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:56.174 [2024-05-15 11:30:14.804771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:56.174 [2024-05-15 11:30:14.804902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:56.174 [2024-05-15 11:30:14.804912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:56.741 [2024-05-15 11:30:15.120175] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:56.741 11:30:15 reap_unregistered_poller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:56.741 11:30:15 reap_unregistered_poller -- common/autotest_common.sh@860 -- # return 0 00:36:56.741 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:36:56.741 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:36:56.741 11:30:15 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.741 11:30:15 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:36:56.741 11:30:15 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.741 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:36:56.741 "name": "app_thread", 00:36:56.741 "id": 1, 00:36:56.741 "active_pollers": [], 00:36:56.741 "timed_pollers": [ 00:36:56.741 { 00:36:56.741 "name": "rpc_subsystem_poll_servers", 00:36:56.741 "id": 1, 00:36:56.741 "state": "waiting", 00:36:56.741 "run_count": 0, 00:36:56.741 "busy_count": 0, 00:36:56.741 "period_ticks": 8800000 00:36:56.741 } 00:36:56.741 ], 00:36:56.741 "paused_pollers": [] 00:36:56.741 }' 00:36:56.741 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:36:56.741 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:36:56.741 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:36:56.741 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:36:57.000 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:36:57.000 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:36:57.000 11:30:15 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:36:57.000 11:30:15 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:57.000 11:30:15 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:36:57.000 5000+0 records in 00:36:57.000 5000+0 records out 00:36:57.000 10240000 bytes (10 MB) copied, 0.0246221 s, 416 MB/s 00:36:57.000 11:30:15 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 
2048 00:36:57.259 AIO0 00:36:57.259 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:57.518 11:30:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:36:57.518 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:36:57.518 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.518 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:36:57.518 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:36:57.518 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.518 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:36:57.518 "name": "app_thread", 00:36:57.518 "id": 1, 00:36:57.518 "active_pollers": [], 00:36:57.518 "timed_pollers": [ 00:36:57.518 { 00:36:57.518 "name": "rpc_subsystem_poll_servers", 00:36:57.518 "id": 1, 00:36:57.518 "state": "waiting", 00:36:57.518 "run_count": 0, 00:36:57.518 "busy_count": 0, 00:36:57.518 "period_ticks": 8800000 00:36:57.518 } 00:36:57.518 ], 00:36:57.518 "paused_pollers": [] 00:36:57.518 }' 00:36:57.518 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:36:57.518 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:36:57.518 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:36:57.518 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:36:57.777 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:36:57.777 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:36:57.777 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:36:57.777 11:30:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 77394 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@946 -- # '[' -z 77394 ']' 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@950 -- # kill -0 77394 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@951 -- # uname 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77394 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:57.777 killing process with pid 77394 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77394' 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@965 -- # kill 77394 00:36:57.777 11:30:16 reap_unregistered_poller -- common/autotest_common.sh@970 -- # wait 77394 00:36:59.155 11:30:17 reap_unregistered_poller -- 
interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:36:59.155 11:30:17 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:36:59.155 ************************************ 00:36:59.155 END TEST reap_unregistered_poller 00:36:59.155 ************************************ 00:36:59.155 00:36:59.155 real 0m3.296s 00:36:59.155 user 0m2.828s 00:36:59.155 sys 0m0.544s 00:36:59.155 11:30:17 reap_unregistered_poller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:59.155 11:30:17 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:36:59.155 11:30:17 -- spdk/autotest.sh@194 -- # uname -s 00:36:59.155 11:30:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:36:59.155 11:30:17 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:36:59.155 11:30:17 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:36:59.155 11:30:17 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:36:59.155 11:30:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:59.155 11:30:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:59.155 11:30:17 -- common/autotest_common.sh@10 -- # set +x 00:36:59.155 ************************************ 00:36:59.155 START TEST spdk_dd 00:36:59.155 ************************************ 00:36:59.155 11:30:17 spdk_dd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:36:59.155 * Looking for test storage... 00:36:59.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.156 11:30:17 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:59.156 11:30:17 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:59.156 11:30:17 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:59.156 11:30:17 spdk_dd -- paths/export.sh@5 -- # export PATH 00:36:59.156 11:30:17 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:59.156 11:30:17 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:59.156 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:36:59.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:36:59.156 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:59.156 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:36:59.156 11:30:17 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:36:59.156 11:30:17 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@230 -- # local class 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@232 -- # local progif 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@233 -- # class=01 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@15 -- # local i 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@24 -- # return 0 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:36:59.156 11:30:17 spdk_dd -- 
scripts/common.sh@320 -- # uname -s 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:36:59.156 11:30:17 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:36:59.156 11:30:17 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@139 -- # local lib so 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libdaos.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libdaos_common.so == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libdfs.so == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libgurt.so.4 == 
liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libz.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libisal.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libcart.so.4 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ liblz4.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libprotobuf-c.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libyaml-0.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libmercury_hl.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libmercury.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libmercury_util.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libna.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libfabric.so.1 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@143 -- # [[ libpsm2.so.2 == liburing.so.* ]] 00:36:59.156 11:30:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:36:59.156 11:30:17 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 
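check_liburing (dd/common.sh@139-143, traced above) decides whether spdk_dd was linked against liburing by listing the binary's resolved shared objects and comparing each against liburing.so.*. A simplified re-creation of that probe:

    # Simplified sketch of the probe traced above; LD_TRACE_LOADED_OBJECTS=1
    # makes the dynamic loader print dependencies (like ldd) instead of running.
    liburing_in_use=0
    while read -r lib _ so _; do                      # e.g. "libaio.so.1 => /lib/... (0x...)"
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    echo "liburing_in_use=$liburing_in_use"

None of the objects listed above matched liburing.so.*, so liburing_in_use stays 0 when dd.sh@15 evaluates its SPDK_TEST_URING condition.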
00:36:59.156 11:30:17 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:36:59.156 11:30:17 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:36:59.156 11:30:17 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:59.157 11:30:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:36:59.157 ************************************ 00:36:59.157 START TEST spdk_dd_basic_rw 00:36:59.157 ************************************ 00:36:59.157 11:30:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:36:59.416 * Looking for test storage... 00:36:59.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:59.416 11:30:17 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:36:59.416 11:30:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:36:59.677 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported 
Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy 
(19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 92 Data Units Written: 204 Host Read Commands: 1783 Host Write Commands: 308 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:36:59.677 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not 
Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized 
SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 92 Data Units Written: 204 Host Read Commands: 1783 Host Write Commands: 308 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA 
Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:36:59.678 ************************************ 00:36:59.678 START TEST dd_bs_lt_native_bs 00:36:59.678 ************************************ 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1121 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:59.678 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:36:59.678 { 00:36:59.678 "subsystems": [ 00:36:59.678 { 00:36:59.678 "subsystem": 
"bdev", 00:36:59.678 "config": [ 00:36:59.678 { 00:36:59.678 "params": { 00:36:59.678 "trtype": "pcie", 00:36:59.678 "name": "Nvme0", 00:36:59.678 "traddr": "0000:00:10.0" 00:36:59.678 }, 00:36:59.678 "method": "bdev_nvme_attach_controller" 00:36:59.678 }, 00:36:59.678 { 00:36:59.678 "method": "bdev_wait_for_examine" 00:36:59.678 } 00:36:59.678 ] 00:36:59.678 } 00:36:59.678 ] 00:36:59.678 } 00:36:59.936 [2024-05-15 11:30:18.368357] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:36:59.936 [2024-05-15 11:30:18.368559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77676 ] 00:36:59.936 [2024-05-15 11:30:18.542534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.194 [2024-05-15 11:30:18.796455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.762 [2024-05-15 11:30:19.226246] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:37:00.762 [2024-05-15 11:30:19.226333] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:01.698 [2024-05-15 11:30:20.072709] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:37:01.957 ************************************ 00:37:01.957 END TEST dd_bs_lt_native_bs 00:37:01.957 ************************************ 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:01.957 00:37:01.957 real 0m2.235s 00:37:01.957 user 0m1.832s 00:37:01.957 sys 0m0.268s 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:37:01.957 ************************************ 00:37:01.957 START TEST dd_rw 00:37:01.957 ************************************ 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1121 -- # basic_rw 4096 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 
00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:01.957 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:02.894 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:37:02.894 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:02.894 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:02.894 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:02.894 { 00:37:02.894 "subsystems": [ 00:37:02.894 { 00:37:02.894 "subsystem": "bdev", 00:37:02.894 "config": [ 00:37:02.894 { 00:37:02.894 "params": { 00:37:02.894 "trtype": "pcie", 00:37:02.894 "name": "Nvme0", 00:37:02.894 "traddr": "0000:00:10.0" 00:37:02.894 }, 00:37:02.894 "method": "bdev_nvme_attach_controller" 00:37:02.894 }, 00:37:02.894 { 00:37:02.894 "method": "bdev_wait_for_examine" 00:37:02.894 } 00:37:02.894 ] 00:37:02.894 } 00:37:02.894 ] 00:37:02.894 } 00:37:02.894 [2024-05-15 11:30:21.401478] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
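get_native_nvme_bs (dd/common.sh@124-134, responsible for the two long identify dumps above) derives the drive's native block size before the read/write passes: it captures spdk_nvme_identify output, finds the current LBA format index, then reads that format's data size. A simplified sketch using the same two regular expressions (the real code captures the output with mapfile):

    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}                 # 04 for this controller
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}            # 4096
    echo "$native_bs"

With a 4096-byte native size, the preceding dd_bs_lt_native_bs test expects --bs=2048 to be rejected, which is the "--bs value cannot be less than input (1) neither output (4096) native block size" error recorded above.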
00:37:02.894 [2024-05-15 11:30:21.401644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77734 ] 00:37:03.153 [2024-05-15 11:30:21.575846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.412 [2024-05-15 11:30:21.798865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.108  Copying: 60/60 [kB] (average 29 MBps) 00:37:05.108 00:37:05.108 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:37:05.108 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:05.108 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:05.108 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:05.108 { 00:37:05.108 "subsystems": [ 00:37:05.108 { 00:37:05.108 "subsystem": "bdev", 00:37:05.108 "config": [ 00:37:05.108 { 00:37:05.108 "params": { 00:37:05.108 "trtype": "pcie", 00:37:05.108 "name": "Nvme0", 00:37:05.108 "traddr": "0000:00:10.0" 00:37:05.108 }, 00:37:05.108 "method": "bdev_nvme_attach_controller" 00:37:05.108 }, 00:37:05.108 { 00:37:05.108 "method": "bdev_wait_for_examine" 00:37:05.108 } 00:37:05.108 ] 00:37:05.108 } 00:37:05.108 ] 00:37:05.108 } 00:37:05.108 [2024-05-15 11:30:23.641266] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:05.108 [2024-05-15 11:30:23.641437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77770 ] 00:37:05.366 [2024-05-15 11:30:23.800083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.625 [2024-05-15 11:30:24.025930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.257  Copying: 60/60 [kB] (average 29 MBps) 00:37:07.257 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:07.257 11:30:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:07.257 { 00:37:07.257 "subsystems": [ 00:37:07.257 { 00:37:07.257 "subsystem": "bdev", 
00:37:07.257 "config": [ 00:37:07.257 { 00:37:07.257 "params": { 00:37:07.257 "trtype": "pcie", 00:37:07.257 "name": "Nvme0", 00:37:07.257 "traddr": "0000:00:10.0" 00:37:07.257 }, 00:37:07.257 "method": "bdev_nvme_attach_controller" 00:37:07.257 }, 00:37:07.257 { 00:37:07.257 "method": "bdev_wait_for_examine" 00:37:07.257 } 00:37:07.257 ] 00:37:07.257 } 00:37:07.257 ] 00:37:07.257 } 00:37:07.257 [2024-05-15 11:30:25.869559] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:07.257 [2024-05-15 11:30:25.869720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77799 ] 00:37:07.515 [2024-05-15 11:30:26.021861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.773 [2024-05-15 11:30:26.243403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.715  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:09.715 00:37:09.715 11:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:09.715 11:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:37:09.715 11:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:37:09.715 11:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:37:09.715 11:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:37:09.715 11:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:09.715 11:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:10.293 11:30:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:37:10.293 11:30:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:10.293 11:30:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:10.293 11:30:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:10.293 { 00:37:10.293 "subsystems": [ 00:37:10.293 { 00:37:10.293 "subsystem": "bdev", 00:37:10.293 "config": [ 00:37:10.293 { 00:37:10.293 "params": { 00:37:10.293 "trtype": "pcie", 00:37:10.293 "name": "Nvme0", 00:37:10.294 "traddr": "0000:00:10.0" 00:37:10.294 }, 00:37:10.294 "method": "bdev_nvme_attach_controller" 00:37:10.294 }, 00:37:10.294 { 00:37:10.294 "method": "bdev_wait_for_examine" 00:37:10.294 } 00:37:10.294 ] 00:37:10.294 } 00:37:10.294 ] 00:37:10.294 } 00:37:10.294 [2024-05-15 11:30:28.907165] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:37:10.294 [2024-05-15 11:30:28.907364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77848 ] 00:37:10.578 [2024-05-15 11:30:29.076757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.836 [2024-05-15 11:30:29.287555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.467  Copying: 60/60 [kB] (average 58 MBps) 00:37:12.467 00:37:12.467 11:30:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:37:12.467 11:30:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:12.467 11:30:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:12.467 11:30:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:12.467 { 00:37:12.467 "subsystems": [ 00:37:12.467 { 00:37:12.467 "subsystem": "bdev", 00:37:12.467 "config": [ 00:37:12.467 { 00:37:12.467 "params": { 00:37:12.467 "trtype": "pcie", 00:37:12.467 "name": "Nvme0", 00:37:12.467 "traddr": "0000:00:10.0" 00:37:12.467 }, 00:37:12.467 "method": "bdev_nvme_attach_controller" 00:37:12.467 }, 00:37:12.467 { 00:37:12.467 "method": "bdev_wait_for_examine" 00:37:12.467 } 00:37:12.467 ] 00:37:12.467 } 00:37:12.467 ] 00:37:12.467 } 00:37:12.467 [2024-05-15 11:30:31.099996] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:12.467 [2024-05-15 11:30:31.100161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77875 ] 00:37:12.725 [2024-05-15 11:30:31.270747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.983 [2024-05-15 11:30:31.492752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.946  Copying: 60/60 [kB] (average 58 MBps) 00:37:14.946 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:14.946 11:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:14.946 { 00:37:14.946 "subsystems": [ 00:37:14.946 { 00:37:14.946 "subsystem": "bdev", 
00:37:14.946 "config": [ 00:37:14.946 { 00:37:14.946 "params": { 00:37:14.946 "trtype": "pcie", 00:37:14.946 "name": "Nvme0", 00:37:14.946 "traddr": "0000:00:10.0" 00:37:14.946 }, 00:37:14.946 "method": "bdev_nvme_attach_controller" 00:37:14.946 }, 00:37:14.946 { 00:37:14.946 "method": "bdev_wait_for_examine" 00:37:14.946 } 00:37:14.946 ] 00:37:14.946 } 00:37:14.946 ] 00:37:14.946 } 00:37:14.946 [2024-05-15 11:30:33.329992] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:14.947 [2024-05-15 11:30:33.330159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77908 ] 00:37:14.947 [2024-05-15 11:30:33.480606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.205 [2024-05-15 11:30:33.684747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.709  Copying: 1024/1024 [kB] (average 500 MBps) 00:37:16.709 00:37:16.967 11:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:37:16.967 11:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:16.967 11:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:37:16.967 11:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:37:16.967 11:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:37:16.967 11:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:37:16.967 11:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:16.967 11:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:17.535 11:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:37:17.535 11:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:17.535 11:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:17.535 11:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:17.535 { 00:37:17.535 "subsystems": [ 00:37:17.535 { 00:37:17.535 "subsystem": "bdev", 00:37:17.535 "config": [ 00:37:17.535 { 00:37:17.535 "params": { 00:37:17.535 "trtype": "pcie", 00:37:17.535 "name": "Nvme0", 00:37:17.535 "traddr": "0000:00:10.0" 00:37:17.535 }, 00:37:17.535 "method": "bdev_nvme_attach_controller" 00:37:17.535 }, 00:37:17.535 { 00:37:17.535 "method": "bdev_wait_for_examine" 00:37:17.535 } 00:37:17.535 ] 00:37:17.535 } 00:37:17.535 ] 00:37:17.535 } 00:37:17.535 [2024-05-15 11:30:36.160659] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:37:17.535 [2024-05-15 11:30:36.161178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77947 ] 00:37:17.794 [2024-05-15 11:30:36.333664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.053 [2024-05-15 11:30:36.575845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.997  Copying: 56/56 [kB] (average 54 MBps) 00:37:19.998 00:37:19.998 11:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:37:19.998 11:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:19.998 11:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:19.998 11:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:19.998 { 00:37:19.998 "subsystems": [ 00:37:19.998 { 00:37:19.998 "subsystem": "bdev", 00:37:19.998 "config": [ 00:37:19.998 { 00:37:19.998 "params": { 00:37:19.998 "trtype": "pcie", 00:37:19.998 "name": "Nvme0", 00:37:19.998 "traddr": "0000:00:10.0" 00:37:19.998 }, 00:37:19.998 "method": "bdev_nvme_attach_controller" 00:37:19.998 }, 00:37:19.998 { 00:37:19.998 "method": "bdev_wait_for_examine" 00:37:19.998 } 00:37:19.998 ] 00:37:19.998 } 00:37:19.998 ] 00:37:19.998 } 00:37:19.998 [2024-05-15 11:30:38.399365] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:19.998 [2024-05-15 11:30:38.399570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77979 ] 00:37:19.998 [2024-05-15 11:30:38.576299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.256 [2024-05-15 11:30:38.828551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.200  Copying: 56/56 [kB] (average 54 MBps) 00:37:22.200 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:22.200 11:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:22.200 { 00:37:22.200 "subsystems": [ 00:37:22.200 { 00:37:22.200 "subsystem": "bdev", 
00:37:22.200 "config": [ 00:37:22.200 { 00:37:22.200 "params": { 00:37:22.200 "trtype": "pcie", 00:37:22.200 "name": "Nvme0", 00:37:22.200 "traddr": "0000:00:10.0" 00:37:22.200 }, 00:37:22.200 "method": "bdev_nvme_attach_controller" 00:37:22.200 }, 00:37:22.200 { 00:37:22.200 "method": "bdev_wait_for_examine" 00:37:22.200 } 00:37:22.200 ] 00:37:22.200 } 00:37:22.200 ] 00:37:22.200 } 00:37:22.200 [2024-05-15 11:30:40.683315] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:22.200 [2024-05-15 11:30:40.683525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78015 ] 00:37:22.460 [2024-05-15 11:30:40.851679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.719 [2024-05-15 11:30:41.121992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.353  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:24.353 00:37:24.353 11:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:24.353 11:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:37:24.353 11:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:37:24.353 11:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:37:24.353 11:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:37:24.353 11:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:24.353 11:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:25.287 11:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:37:25.287 11:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:25.287 11:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:25.287 11:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:25.287 { 00:37:25.287 "subsystems": [ 00:37:25.287 { 00:37:25.287 "subsystem": "bdev", 00:37:25.287 "config": [ 00:37:25.287 { 00:37:25.287 "params": { 00:37:25.287 "trtype": "pcie", 00:37:25.287 "name": "Nvme0", 00:37:25.287 "traddr": "0000:00:10.0" 00:37:25.287 }, 00:37:25.287 "method": "bdev_nvme_attach_controller" 00:37:25.287 }, 00:37:25.287 { 00:37:25.288 "method": "bdev_wait_for_examine" 00:37:25.288 } 00:37:25.288 ] 00:37:25.288 } 00:37:25.288 ] 00:37:25.288 } 00:37:25.288 [2024-05-15 11:30:43.754654] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:37:25.288 [2024-05-15 11:30:43.755078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78055 ] 00:37:25.288 [2024-05-15 11:30:43.905250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.546 [2024-05-15 11:30:44.118538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.487  Copying: 56/56 [kB] (average 54 MBps) 00:37:27.487 00:37:27.487 11:30:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:37:27.487 11:30:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:27.487 11:30:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:27.487 11:30:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:27.487 { 00:37:27.487 "subsystems": [ 00:37:27.487 { 00:37:27.487 "subsystem": "bdev", 00:37:27.487 "config": [ 00:37:27.487 { 00:37:27.487 "params": { 00:37:27.487 "trtype": "pcie", 00:37:27.487 "name": "Nvme0", 00:37:27.487 "traddr": "0000:00:10.0" 00:37:27.487 }, 00:37:27.487 "method": "bdev_nvme_attach_controller" 00:37:27.487 }, 00:37:27.487 { 00:37:27.487 "method": "bdev_wait_for_examine" 00:37:27.487 } 00:37:27.487 ] 00:37:27.487 } 00:37:27.487 ] 00:37:27.487 } 00:37:27.487 [2024-05-15 11:30:45.903996] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:27.487 [2024-05-15 11:30:45.904183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78093 ] 00:37:27.487 [2024-05-15 11:30:46.057712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.745 [2024-05-15 11:30:46.289746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.686  Copying: 56/56 [kB] (average 54 MBps) 00:37:29.686 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:29.686 11:30:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:29.686 { 00:37:29.686 "subsystems": [ 00:37:29.686 { 00:37:29.686 "subsystem": "bdev", 
00:37:29.686 "config": [ 00:37:29.686 { 00:37:29.686 "params": { 00:37:29.686 "trtype": "pcie", 00:37:29.686 "name": "Nvme0", 00:37:29.686 "traddr": "0000:00:10.0" 00:37:29.686 }, 00:37:29.686 "method": "bdev_nvme_attach_controller" 00:37:29.686 }, 00:37:29.686 { 00:37:29.686 "method": "bdev_wait_for_examine" 00:37:29.686 } 00:37:29.686 ] 00:37:29.686 } 00:37:29.686 ] 00:37:29.686 } 00:37:29.686 [2024-05-15 11:30:48.171410] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:29.686 [2024-05-15 11:30:48.171637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78122 ] 00:37:29.945 [2024-05-15 11:30:48.335259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.202 [2024-05-15 11:30:48.595281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.831  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:31.831 00:37:31.831 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:37:31.831 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:31.831 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:37:31.831 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:37:31.831 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:37:31.831 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:37:31.831 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:31.831 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:32.397 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:37:32.397 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:32.397 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:32.397 11:30:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:32.397 { 00:37:32.397 "subsystems": [ 00:37:32.397 { 00:37:32.397 "subsystem": "bdev", 00:37:32.397 "config": [ 00:37:32.397 { 00:37:32.397 "params": { 00:37:32.397 "trtype": "pcie", 00:37:32.397 "name": "Nvme0", 00:37:32.397 "traddr": "0000:00:10.0" 00:37:32.397 }, 00:37:32.397 "method": "bdev_nvme_attach_controller" 00:37:32.397 }, 00:37:32.397 { 00:37:32.397 "method": "bdev_wait_for_examine" 00:37:32.397 } 00:37:32.397 ] 00:37:32.397 } 00:37:32.397 ] 00:37:32.397 } 00:37:32.655 [2024-05-15 11:30:51.070538] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:37:32.655 [2024-05-15 11:30:51.070765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78165 ] 00:37:32.655 [2024-05-15 11:30:51.236569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.914 [2024-05-15 11:30:51.491748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:34.851  Copying: 48/48 [kB] (average 46 MBps) 00:37:34.851 00:37:34.851 11:30:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:37:34.851 11:30:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:34.851 11:30:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:34.851 11:30:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:34.851 { 00:37:34.851 "subsystems": [ 00:37:34.851 { 00:37:34.851 "subsystem": "bdev", 00:37:34.851 "config": [ 00:37:34.851 { 00:37:34.851 "params": { 00:37:34.851 "trtype": "pcie", 00:37:34.851 "name": "Nvme0", 00:37:34.851 "traddr": "0000:00:10.0" 00:37:34.851 }, 00:37:34.851 "method": "bdev_nvme_attach_controller" 00:37:34.851 }, 00:37:34.851 { 00:37:34.851 "method": "bdev_wait_for_examine" 00:37:34.851 } 00:37:34.851 ] 00:37:34.851 } 00:37:34.851 ] 00:37:34.851 } 00:37:34.851 [2024-05-15 11:30:53.329794] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:34.851 [2024-05-15 11:30:53.329996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78197 ] 00:37:34.851 [2024-05-15 11:30:53.480593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.109 [2024-05-15 11:30:53.720160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.049  Copying: 48/48 [kB] (average 46 MBps) 00:37:37.049 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:37.049 11:30:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:37.049 { 00:37:37.049 "subsystems": [ 00:37:37.049 { 00:37:37.049 "subsystem": "bdev", 
00:37:37.049 "config": [ 00:37:37.049 { 00:37:37.049 "params": { 00:37:37.049 "trtype": "pcie", 00:37:37.049 "name": "Nvme0", 00:37:37.049 "traddr": "0000:00:10.0" 00:37:37.049 }, 00:37:37.049 "method": "bdev_nvme_attach_controller" 00:37:37.049 }, 00:37:37.049 { 00:37:37.049 "method": "bdev_wait_for_examine" 00:37:37.049 } 00:37:37.049 ] 00:37:37.049 } 00:37:37.049 ] 00:37:37.049 } 00:37:37.049 [2024-05-15 11:30:55.619074] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:37.049 [2024-05-15 11:30:55.619334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78230 ] 00:37:37.307 [2024-05-15 11:30:55.767462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.565 [2024-05-15 11:30:55.984430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.198  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:39.198 00:37:39.198 11:30:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:39.198 11:30:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:37:39.198 11:30:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:37:39.198 11:30:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:37:39.198 11:30:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:37:39.198 11:30:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:39.198 11:30:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:40.132 11:30:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:37:40.132 11:30:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:40.132 11:30:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:40.132 11:30:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:40.132 { 00:37:40.132 "subsystems": [ 00:37:40.132 { 00:37:40.132 "subsystem": "bdev", 00:37:40.132 "config": [ 00:37:40.132 { 00:37:40.132 "params": { 00:37:40.132 "trtype": "pcie", 00:37:40.132 "name": "Nvme0", 00:37:40.132 "traddr": "0000:00:10.0" 00:37:40.132 }, 00:37:40.132 "method": "bdev_nvme_attach_controller" 00:37:40.132 }, 00:37:40.132 { 00:37:40.132 "method": "bdev_wait_for_examine" 00:37:40.132 } 00:37:40.132 ] 00:37:40.132 } 00:37:40.132 ] 00:37:40.132 } 00:37:40.132 [2024-05-15 11:30:58.555152] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:37:40.133 [2024-05-15 11:30:58.555336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78269 ] 00:37:40.133 [2024-05-15 11:30:58.708826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.390 [2024-05-15 11:30:58.925338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.323  Copying: 48/48 [kB] (average 46 MBps) 00:37:42.323 00:37:42.323 11:31:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:37:42.323 11:31:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:42.323 11:31:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:42.323 11:31:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:42.323 { 00:37:42.323 "subsystems": [ 00:37:42.323 { 00:37:42.323 "subsystem": "bdev", 00:37:42.323 "config": [ 00:37:42.323 { 00:37:42.323 "params": { 00:37:42.323 "trtype": "pcie", 00:37:42.323 "name": "Nvme0", 00:37:42.323 "traddr": "0000:00:10.0" 00:37:42.323 }, 00:37:42.323 "method": "bdev_nvme_attach_controller" 00:37:42.323 }, 00:37:42.323 { 00:37:42.323 "method": "bdev_wait_for_examine" 00:37:42.323 } 00:37:42.323 ] 00:37:42.323 } 00:37:42.323 ] 00:37:42.323 } 00:37:42.323 [2024-05-15 11:31:00.864565] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:42.323 [2024-05-15 11:31:00.864755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78304 ] 00:37:42.581 [2024-05-15 11:31:01.021669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.838 [2024-05-15 11:31:01.280066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.839  Copying: 48/48 [kB] (average 46 MBps) 00:37:44.839 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:44.839 11:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:44.839 { 00:37:44.839 "subsystems": [ 00:37:44.839 { 00:37:44.839 "subsystem": "bdev", 
00:37:44.839 "config": [ 00:37:44.839 { 00:37:44.839 "params": { 00:37:44.839 "trtype": "pcie", 00:37:44.840 "name": "Nvme0", 00:37:44.840 "traddr": "0000:00:10.0" 00:37:44.840 }, 00:37:44.840 "method": "bdev_nvme_attach_controller" 00:37:44.840 }, 00:37:44.840 { 00:37:44.840 "method": "bdev_wait_for_examine" 00:37:44.840 } 00:37:44.840 ] 00:37:44.840 } 00:37:44.840 ] 00:37:44.840 } 00:37:44.840 [2024-05-15 11:31:03.253413] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:44.840 [2024-05-15 11:31:03.253586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78340 ] 00:37:44.840 [2024-05-15 11:31:03.408225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.097 [2024-05-15 11:31:03.658991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.050  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:47.050 00:37:47.050 ************************************ 00:37:47.050 END TEST dd_rw 00:37:47.050 ************************************ 00:37:47.050 00:37:47.050 real 0m44.838s 00:37:47.050 user 0m37.451s 00:37:47.050 sys 0m4.973s 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:37:47.050 ************************************ 00:37:47.050 START TEST dd_rw_offset 00:37:47.050 ************************************ 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1121 -- # basic_offset 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:37:47.050 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:37:47.051 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=3367v50s1104ci3rmcdw81zyk2evlz0gzgr7ayxdqb3vdv9cqi6cq2r9stl8bdceu675e5w6hf7rx7wc3nnx6twxqv37kj1hybbb7daq670wfgokmoiqao5l485hl6v0spv3ci7nj7cbg94e1nrimzfj6ghl9i55ss1gf5c4qd4phoz8jzsft3el9k1fcvemwj4d1jbcm69o07a1024jd6eyttamskdnpqvc2acnc29422w2j9em2mxrpm9ta4nvmyfpfs9zj7xvcv1xewopp0itf8agwxos1cj3zdhap55lgar2v32e96rr79fo7t7khy2wdb9u1lc2dln64pws6fh9ycktprx3s60v3ay13vetlnqd8ie64ki43uoufq4lsvdt171nnn84prqccsue0jkojinxua3k77vyzogrb5ubf1qfjotv5u7yjl0b5w34qhrqqt5k65xcilbbz5s8k95hveejsk7u7u2qg3y475ggz7af849pz42pdqw6xqbg2s1dlo0hlc6ybn2ckq6x8zgd8f83a8948ik96nlhsrgtu2a28nd4x7aykuxjn1n6op8fpu6gt4skkoz8xe9syb81z0rzdhia3rqzlepjl4ddlelkkob5wgd8soip2bijatjuh6qj2ejd5x3o7x824hw42q2qn1pxe8xy2m28q9stx91y81ybsto7ua3siusgiq4hhovd7la03fu78bumlvmcd551vpw0gbvy643m8hjdlgwhmfkjy27lkl6yuedkzjnw0u25ac85logfkf1pdmlg4higpho0iu5n254wmpaiqyzztm9hhrzxirjsn8wgsw42aucdqqsr57gj7tyei4xvera4953qoy9swokxczpi4ocbsehlhfc7ixsdwuiv8huovp9mwxhkut3sv7j91rjfh68xh9wae6uh7t08wd6lssrlhvwv9m3mi8cxw7ujpku162itv4lfc4ojo806c1yic0h5omzq4i2qegfyhxmjpysl4r169k1sp0fd785yhn2h4oheocqdirf0a5itc6n3qrbn96taf4k8j4d5ae8almn9umt7h1jevrpyqq503dow6n3jntnll69jxmdwkfz6d8xzwdcdsx9n9ywpc6sjs1044mi8m6sepa4ivi92pgtdckpy8z0iz7bze9fbrrp1hfvtqrex3suul7ct4dbk218k1di8kfsfht881tu2b0iqecna573h747alrxsn0j8fi5djac6u1oh2gzh664lyu8l5e29facpnnp9yp490gk6zofj9cnj30870lhwba66forsqqq7ozab3iqnb0rz9w0cjztgjqlu0l530xrg2pwln6ldhcwwx5ax23hyh69ynb8hny79xjr6km7124fhacgdtxvg9q6b07qcoptjv24oj4wydhe3c08u6vjzbos7yy15s6wzj47440rq2snmaiyiqs53bzbk4y02nvlamau2wrbad22rdyh9pzfhjti30fqv3hrspt3r1yaykui2bpyot5nqpj8riy4ia0lccanhxyn8cqy2mifwfvflxwwkkuub8tebzguq2p79ucni3bn508ef0go6p3isspx4s67l73nrnadu0cms9inyfrjra0m6cctqzxk75x7yqmuqqxmfml92jj58dhhs49ur72o6f5ly50vfd28f7647z2p5xrl1mv13qt906g6mas54qq1nganou1jlyoycfy68yktc6pqack6eo3nt9fp9ah123cjd9l9vd1cj1wcgp4vblcinalcolxk8106vrme0joto7wsfskjdrh1fp9k2682iverzultvpk3y0gl6yfzm4xlqyseatu3int34e9rwc9b66hp8m1xbfyl95sgrqjp90mequ3hqrfb2rip3uylw94g2awzkdht6sjl0q5p7dtyfn16mj8b7gkxdxzqmb4joqb3mp882ioxlewqhai99son6tu1naadz2v1sd3p9daxg6cwnd6wb18tb7jdrhyggrb7mfjkty7nogmhb2nuvi9l4f48b7kv77j0p9xvnnaqsa483fdcw65o2g90npqerhs2asovgousuugtpghyj1mprgsd24ov0ncfa5odgovdek2sbsu8oajrz4fvg77zwtbwwsrrhyedavc4z5c51j6wpqui6ku4r493uhov6r1h8tj53ywef3leyj0n746asjj40dkxzhc7lq71z9mdsns6jqq2ynuz7il8huu0d9egacd60smrghubm933omumc7m863jf41h3w6mp9o4rh28xjzfx592x9couqdl99pqs1ul2rbumjytnj49vusy3bh6axo2ecoweipf6jxks0albc8ub0lzkwsfurg47t90m6s3sqlhzhsdyzy5t1pse3bm15bj91i9a9madyzj68pqidh80gutxrdp7a9ipfaylgd6r9fsi7xg2ge8slbw6f058drn9c5bfkts08a8870q9a5e0ncefkip2fc2bnkx93au6uv1bcidz7my62u8lv53cc2xgkj4v7jjvdhd13ab7od8mtm191zisbbb20w3rkghgfrmlwvhe0ykwsd66bh3uabf55vxj6aitp12jlj1kv7oe2pwhnt1ykjvyj2qd7aiaw4xkadc38o83ixj2cyqmmsi335brym2c0wwv96bns939acwl3gwstu35l8aftqieqzenpff84flmbg1xbvrwem61enci58vcx5waovxath48waobpi2hz6tggr5l5q1yngqey211hqj8sbxd4zzt35so5b4t2pnvx183bpbak11958s6up206kvc7q9gaifzwjg8beegivd3b72toiafi6j2l0h8qan49e3x3wxn2398us9wslzk7ug4r2qvq78bnsafg4dakyo687ypeh1ityi304xelpx4b2r317m2wif883f5rm47s3vyknt8tuovbduqej48lqzov86mwczfpff1swrmyt43odpj7qgy2s4r8uxuiqkyaws1e11j2b9pgmb3r4qcmcaw455qzjorhjqs34jdj1cbjm2p0wvs7k8ej75smdkx0w1xm593foxrm60a7u2e8mzsrj0m2ud4h88q5upst44x51tk2yksdnuoay2gz4y31fh8ylbbcxkoef9m1h38ri6nq3zt8wt8qp0lgo8irn4u72hcjo32rulgpzf3p0lrr5777pqspvlbl5aulnxxlol35yjth7rqpgec8tfsp9l0y1fp6dydpb505afura0f2e34zdexzidkdtrqiryfkhgpiid9bik9clyd65sqvsqbe7xn0sk08arlq4d2nv3u3k0eug47troaamdzbyvrc4iekpnnlol5b7qkqbu1d133g4zclzpuyxa91hs7ponera3l55yw0zhawxiqp977v70bp2qod8jovbtp8vfxmxibwhk6py4w8q94zztp7pytm8sgox7w4g34jtvjuztmndt7f0orjql8hanr084kodbm4s21wjhp7zyts2yivybck2bpn150exho2c1cnecm1lrq5yb2fckk32t
mcejsnh3me3mr8gtn3chrbbpg05wkmj5phav4nni1exf376cmug9z0u8pqqt4pi8iql8k79gutez3ik3n57qylcfs8kpqv4kcxvsdxuz9q597oim1k02s1531ujvtet5u2v850zotofzan5pv484689vud2t7cc2bng0fxtrzbig7x6lq2pu9b9kib0wckw88f5xpl5j3zyre7bljd65gtx30gvnsendh685tnpfntw56pk6dlsi3g6f5w2orizxizy0ogmbcc72alf6pj7y05hn8moxhuujzrqopym1osab91a711tj1wiylzl1i68a9f4895b1b78ztes7cpa2jh7oesrjinl2f4vdce4l4oi7zrd9c41yt6kimb8qvhd7q5i56mtarlbzhsk7n2v6zuipei7lfr06gxnbr5h4lnzu4nuwx67e255p51hn4ewlc7qn72hx3b52dtj6gclquad6slp6e7f302qc84uxsiihpjg4ztdzdulb8ovlvm24xlemcphrrtdpq8s0emay6br8smwnx43i8a 00:37:47.051 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:37:47.051 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:37:47.051 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:37:47.051 11:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:37:47.051 { 00:37:47.051 "subsystems": [ 00:37:47.051 { 00:37:47.051 "subsystem": "bdev", 00:37:47.051 "config": [ 00:37:47.051 { 00:37:47.051 "params": { 00:37:47.051 "trtype": "pcie", 00:37:47.051 "name": "Nvme0", 00:37:47.051 "traddr": "0000:00:10.0" 00:37:47.051 }, 00:37:47.051 "method": "bdev_nvme_attach_controller" 00:37:47.051 }, 00:37:47.051 { 00:37:47.051 "method": "bdev_wait_for_examine" 00:37:47.051 } 00:37:47.051 ] 00:37:47.051 } 00:37:47.051 ] 00:37:47.051 } 00:37:47.051 [2024-05-15 11:31:05.601604] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:47.051 [2024-05-15 11:31:05.601783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78392 ] 00:37:47.309 [2024-05-15 11:31:05.753792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.567 [2024-05-15 11:31:05.975605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.196  Copying: 4096/4096 [B] (average 4000 kBps) 00:37:49.196 00:37:49.196 11:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:37:49.196 11:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:37:49.196 11:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:37:49.196 11:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:37:49.196 { 00:37:49.196 "subsystems": [ 00:37:49.196 { 00:37:49.196 "subsystem": "bdev", 00:37:49.196 "config": [ 00:37:49.196 { 00:37:49.196 "params": { 00:37:49.196 "trtype": "pcie", 00:37:49.196 "name": "Nvme0", 00:37:49.196 "traddr": "0000:00:10.0" 00:37:49.196 }, 00:37:49.196 "method": "bdev_nvme_attach_controller" 00:37:49.196 }, 00:37:49.196 { 00:37:49.196 "method": "bdev_wait_for_examine" 00:37:49.196 } 00:37:49.196 ] 00:37:49.196 } 00:37:49.196 ] 00:37:49.196 } 00:37:49.196 [2024-05-15 11:31:07.813709] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:37:49.196 [2024-05-15 11:31:07.814156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78428 ] 00:37:49.454 [2024-05-15 11:31:07.965360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:49.712 [2024-05-15 11:31:08.187347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:51.651  Copying: 4096/4096 [B] (average 4000 kBps) 00:37:51.651 00:37:51.651 11:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:37:51.651 ************************************ 00:37:51.651 END TEST dd_rw_offset 00:37:51.651 ************************************ 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 3367v50s1104ci3rmcdw81zyk2evlz0gzgr7ayxdqb3vdv9cqi6cq2r9stl8bdceu675e5w6hf7rx7wc3nnx6twxqv37kj1hybbb7daq670wfgokmoiqao5l485hl6v0spv3ci7nj7cbg94e1nrimzfj6ghl9i55ss1gf5c4qd4phoz8jzsft3el9k1fcvemwj4d1jbcm69o07a1024jd6eyttamskdnpqvc2acnc29422w2j9em2mxrpm9ta4nvmyfpfs9zj7xvcv1xewopp0itf8agwxos1cj3zdhap55lgar2v32e96rr79fo7t7khy2wdb9u1lc2dln64pws6fh9ycktprx3s60v3ay13vetlnqd8ie64ki43uoufq4lsvdt171nnn84prqccsue0jkojinxua3k77vyzogrb5ubf1qfjotv5u7yjl0b5w34qhrqqt5k65xcilbbz5s8k95hveejsk7u7u2qg3y475ggz7af849pz42pdqw6xqbg2s1dlo0hlc6ybn2ckq6x8zgd8f83a8948ik96nlhsrgtu2a28nd4x7aykuxjn1n6op8fpu6gt4skkoz8xe9syb81z0rzdhia3rqzlepjl4ddlelkkob5wgd8soip2bijatjuh6qj2ejd5x3o7x824hw42q2qn1pxe8xy2m28q9stx91y81ybsto7ua3siusgiq4hhovd7la03fu78bumlvmcd551vpw0gbvy643m8hjdlgwhmfkjy27lkl6yuedkzjnw0u25ac85logfkf1pdmlg4higpho0iu5n254wmpaiqyzztm9hhrzxirjsn8wgsw42aucdqqsr57gj7tyei4xvera4953qoy9swokxczpi4ocbsehlhfc7ixsdwuiv8huovp9mwxhkut3sv7j91rjfh68xh9wae6uh7t08wd6lssrlhvwv9m3mi8cxw7ujpku162itv4lfc4ojo806c1yic0h5omzq4i2qegfyhxmjpysl4r169k1sp0fd785yhn2h4oheocqdirf0a5itc6n3qrbn96taf4k8j4d5ae8almn9umt7h1jevrpyqq503dow6n3jntnll69jxmdwkfz6d8xzwdcdsx9n9ywpc6sjs1044mi8m6sepa4ivi92pgtdckpy8z0iz7bze9fbrrp1hfvtqrex3suul7ct4dbk218k1di8kfsfht881tu2b0iqecna573h747alrxsn0j8fi5djac6u1oh2gzh664lyu8l5e29facpnnp9yp490gk6zofj9cnj30870lhwba66forsqqq7ozab3iqnb0rz9w0cjztgjqlu0l530xrg2pwln6ldhcwwx5ax23hyh69ynb8hny79xjr6km7124fhacgdtxvg9q6b07qcoptjv24oj4wydhe3c08u6vjzbos7yy15s6wzj47440rq2snmaiyiqs53bzbk4y02nvlamau2wrbad22rdyh9pzfhjti30fqv3hrspt3r1yaykui2bpyot5nqpj8riy4ia0lccanhxyn8cqy2mifwfvflxwwkkuub8tebzguq2p79ucni3bn508ef0go6p3isspx4s67l73nrnadu0cms9inyfrjra0m6cctqzxk75x7yqmuqqxmfml92jj58dhhs49ur72o6f5ly50vfd28f7647z2p5xrl1mv13qt906g6mas54qq1nganou1jlyoycfy68yktc6pqack6eo3nt9fp9ah123cjd9l9vd1cj1wcgp4vblcinalcolxk8106vrme0joto7wsfskjdrh1fp9k2682iverzultvpk3y0gl6yfzm4xlqyseatu3int34e9rwc9b66hp8m1xbfyl95sgrqjp90mequ3hqrfb2rip3uylw94g2awzkdht6sjl0q5p7dtyfn16mj8b7gkxdxzqmb4joqb3mp882ioxlewqhai99son6tu1naadz2v1sd3p9daxg6cwnd6wb18tb7jdrhyggrb7mfjkty7nogmhb2nuvi9l4f48b7kv77j0p9xvnnaqsa483fdcw65o2g90npqerhs2asovgousuugtpghyj1mprgsd24ov0ncfa5odgovdek2sbsu8oajrz4fvg77zwtbwwsrrhyedavc4z5c51j6wpqui6ku4r493uhov6r1h8tj53ywef3leyj0n746asjj40dkxzhc7lq71z9mdsns6jqq2ynuz7il8huu0d9egacd60smrghubm933omumc7m863jf41h3w6mp9o4rh28xjzfx592x9couqdl99pqs1ul2rbumjytnj49vusy3bh6axo2ecoweipf6jxks0albc8ub0lzkwsfurg47t90m6s3sqlhzhsdyzy5t1pse3bm15bj91i9a9madyzj68pqidh80gutxrdp7a9ipfaylgd6r9fsi7xg2ge8slbw6f058drn9c5bfkts08a8870q9a5e0ncefkip2fc2bnkx93au6uv1bcidz7my62u8lv53cc2xgkj4v7jjvdhd13ab7od8mtm191zisbbb20w3rkghgfrmlwvhe0ykwsd66bh3uabf55vxj6aitp12jlj1kv7oe2pwhnt1ykjvyj2qd7aiaw4xkadc38o83ixj2cyqmmsi
335brym2c0wwv96bns939acwl3gwstu35l8aftqieqzenpff84flmbg1xbvrwem61enci58vcx5waovxath48waobpi2hz6tggr5l5q1yngqey211hqj8sbxd4zzt35so5b4t2pnvx183bpbak11958s6up206kvc7q9gaifzwjg8beegivd3b72toiafi6j2l0h8qan49e3x3wxn2398us9wslzk7ug4r2qvq78bnsafg4dakyo687ypeh1ityi304xelpx4b2r317m2wif883f5rm47s3vyknt8tuovbduqej48lqzov86mwczfpff1swrmyt43odpj7qgy2s4r8uxuiqkyaws1e11j2b9pgmb3r4qcmcaw455qzjorhjqs34jdj1cbjm2p0wvs7k8ej75smdkx0w1xm593foxrm60a7u2e8mzsrj0m2ud4h88q5upst44x51tk2yksdnuoay2gz4y31fh8ylbbcxkoef9m1h38ri6nq3zt8wt8qp0lgo8irn4u72hcjo32rulgpzf3p0lrr5777pqspvlbl5aulnxxlol35yjth7rqpgec8tfsp9l0y1fp6dydpb505afura0f2e34zdexzidkdtrqiryfkhgpiid9bik9clyd65sqvsqbe7xn0sk08arlq4d2nv3u3k0eug47troaamdzbyvrc4iekpnnlol5b7qkqbu1d133g4zclzpuyxa91hs7ponera3l55yw0zhawxiqp977v70bp2qod8jovbtp8vfxmxibwhk6py4w8q94zztp7pytm8sgox7w4g34jtvjuztmndt7f0orjql8hanr084kodbm4s21wjhp7zyts2yivybck2bpn150exho2c1cnecm1lrq5yb2fckk32tmcejsnh3me3mr8gtn3chrbbpg05wkmj5phav4nni1exf376cmug9z0u8pqqt4pi8iql8k79gutez3ik3n57qylcfs8kpqv4kcxvsdxuz9q597oim1k02s1531ujvtet5u2v850zotofzan5pv484689vud2t7cc2bng0fxtrzbig7x6lq2pu9b9kib0wckw88f5xpl5j3zyre7bljd65gtx30gvnsendh685tnpfntw56pk6dlsi3g6f5w2orizxizy0ogmbcc72alf6pj7y05hn8moxhuujzrqopym1osab91a711tj1wiylzl1i68a9f4895b1b78ztes7cpa2jh7oesrjinl2f4vdce4l4oi7zrd9c41yt6kimb8qvhd7q5i56mtarlbzhsk7n2v6zuipei7lfr06gxnbr5h4lnzu4nuwx67e255p51hn4ewlc7qn72hx3b52dtj6gclquad6slp6e7f302qc84uxsiihpjg4ztdzdulb8ovlvm24xlemcphrrtdpq8s0emay6br8smwnx43i8a == \3\3\6\7\v\5\0\s\1\1\0\4\c\i\3\r\m\c\d\w\8\1\z\y\k\2\e\v\l\z\0\g\z\g\r\7\a\y\x\d\q\b\3\v\d\v\9\c\q\i\6\c\q\2\r\9\s\t\l\8\b\d\c\e\u\6\7\5\e\5\w\6\h\f\7\r\x\7\w\c\3\n\n\x\6\t\w\x\q\v\3\7\k\j\1\h\y\b\b\b\7\d\a\q\6\7\0\w\f\g\o\k\m\o\i\q\a\o\5\l\4\8\5\h\l\6\v\0\s\p\v\3\c\i\7\n\j\7\c\b\g\9\4\e\1\n\r\i\m\z\f\j\6\g\h\l\9\i\5\5\s\s\1\g\f\5\c\4\q\d\4\p\h\o\z\8\j\z\s\f\t\3\e\l\9\k\1\f\c\v\e\m\w\j\4\d\1\j\b\c\m\6\9\o\0\7\a\1\0\2\4\j\d\6\e\y\t\t\a\m\s\k\d\n\p\q\v\c\2\a\c\n\c\2\9\4\2\2\w\2\j\9\e\m\2\m\x\r\p\m\9\t\a\4\n\v\m\y\f\p\f\s\9\z\j\7\x\v\c\v\1\x\e\w\o\p\p\0\i\t\f\8\a\g\w\x\o\s\1\c\j\3\z\d\h\a\p\5\5\l\g\a\r\2\v\3\2\e\9\6\r\r\7\9\f\o\7\t\7\k\h\y\2\w\d\b\9\u\1\l\c\2\d\l\n\6\4\p\w\s\6\f\h\9\y\c\k\t\p\r\x\3\s\6\0\v\3\a\y\1\3\v\e\t\l\n\q\d\8\i\e\6\4\k\i\4\3\u\o\u\f\q\4\l\s\v\d\t\1\7\1\n\n\n\8\4\p\r\q\c\c\s\u\e\0\j\k\o\j\i\n\x\u\a\3\k\7\7\v\y\z\o\g\r\b\5\u\b\f\1\q\f\j\o\t\v\5\u\7\y\j\l\0\b\5\w\3\4\q\h\r\q\q\t\5\k\6\5\x\c\i\l\b\b\z\5\s\8\k\9\5\h\v\e\e\j\s\k\7\u\7\u\2\q\g\3\y\4\7\5\g\g\z\7\a\f\8\4\9\p\z\4\2\p\d\q\w\6\x\q\b\g\2\s\1\d\l\o\0\h\l\c\6\y\b\n\2\c\k\q\6\x\8\z\g\d\8\f\8\3\a\8\9\4\8\i\k\9\6\n\l\h\s\r\g\t\u\2\a\2\8\n\d\4\x\7\a\y\k\u\x\j\n\1\n\6\o\p\8\f\p\u\6\g\t\4\s\k\k\o\z\8\x\e\9\s\y\b\8\1\z\0\r\z\d\h\i\a\3\r\q\z\l\e\p\j\l\4\d\d\l\e\l\k\k\o\b\5\w\g\d\8\s\o\i\p\2\b\i\j\a\t\j\u\h\6\q\j\2\e\j\d\5\x\3\o\7\x\8\2\4\h\w\4\2\q\2\q\n\1\p\x\e\8\x\y\2\m\2\8\q\9\s\t\x\9\1\y\8\1\y\b\s\t\o\7\u\a\3\s\i\u\s\g\i\q\4\h\h\o\v\d\7\l\a\0\3\f\u\7\8\b\u\m\l\v\m\c\d\5\5\1\v\p\w\0\g\b\v\y\6\4\3\m\8\h\j\d\l\g\w\h\m\f\k\j\y\2\7\l\k\l\6\y\u\e\d\k\z\j\n\w\0\u\2\5\a\c\8\5\l\o\g\f\k\f\1\p\d\m\l\g\4\h\i\g\p\h\o\0\i\u\5\n\2\5\4\w\m\p\a\i\q\y\z\z\t\m\9\h\h\r\z\x\i\r\j\s\n\8\w\g\s\w\4\2\a\u\c\d\q\q\s\r\5\7\g\j\7\t\y\e\i\4\x\v\e\r\a\4\9\5\3\q\o\y\9\s\w\o\k\x\c\z\p\i\4\o\c\b\s\e\h\l\h\f\c\7\i\x\s\d\w\u\i\v\8\h\u\o\v\p\9\m\w\x\h\k\u\t\3\s\v\7\j\9\1\r\j\f\h\6\8\x\h\9\w\a\e\6\u\h\7\t\0\8\w\d\6\l\s\s\r\l\h\v\w\v\9\m\3\m\i\8\c\x\w\7\u\j\p\k\u\1\6\2\i\t\v\4\l\f\c\4\o\j\o\8\0\6\c\1\y\i\c\0\h\5\o\m\z\q\4\i\2\q\e\g\f\y\h\x\m\j\p\y\s\l\4\r\1\6\9\k\1\s\p\0\f\d\7\8\5\y\h\n\2\h\4\o\h\e\o\c\q\d\i\r\f\0\a\5\i\t\c\6\n\3\q\r\b\n\9\6\
t\a\f\4\k\8\j\4\d\5\a\e\8\a\l\m\n\9\u\m\t\7\h\1\j\e\v\r\p\y\q\q\5\0\3\d\o\w\6\n\3\j\n\t\n\l\l\6\9\j\x\m\d\w\k\f\z\6\d\8\x\z\w\d\c\d\s\x\9\n\9\y\w\p\c\6\s\j\s\1\0\4\4\m\i\8\m\6\s\e\p\a\4\i\v\i\9\2\p\g\t\d\c\k\p\y\8\z\0\i\z\7\b\z\e\9\f\b\r\r\p\1\h\f\v\t\q\r\e\x\3\s\u\u\l\7\c\t\4\d\b\k\2\1\8\k\1\d\i\8\k\f\s\f\h\t\8\8\1\t\u\2\b\0\i\q\e\c\n\a\5\7\3\h\7\4\7\a\l\r\x\s\n\0\j\8\f\i\5\d\j\a\c\6\u\1\o\h\2\g\z\h\6\6\4\l\y\u\8\l\5\e\2\9\f\a\c\p\n\n\p\9\y\p\4\9\0\g\k\6\z\o\f\j\9\c\n\j\3\0\8\7\0\l\h\w\b\a\6\6\f\o\r\s\q\q\q\7\o\z\a\b\3\i\q\n\b\0\r\z\9\w\0\c\j\z\t\g\j\q\l\u\0\l\5\3\0\x\r\g\2\p\w\l\n\6\l\d\h\c\w\w\x\5\a\x\2\3\h\y\h\6\9\y\n\b\8\h\n\y\7\9\x\j\r\6\k\m\7\1\2\4\f\h\a\c\g\d\t\x\v\g\9\q\6\b\0\7\q\c\o\p\t\j\v\2\4\o\j\4\w\y\d\h\e\3\c\0\8\u\6\v\j\z\b\o\s\7\y\y\1\5\s\6\w\z\j\4\7\4\4\0\r\q\2\s\n\m\a\i\y\i\q\s\5\3\b\z\b\k\4\y\0\2\n\v\l\a\m\a\u\2\w\r\b\a\d\2\2\r\d\y\h\9\p\z\f\h\j\t\i\3\0\f\q\v\3\h\r\s\p\t\3\r\1\y\a\y\k\u\i\2\b\p\y\o\t\5\n\q\p\j\8\r\i\y\4\i\a\0\l\c\c\a\n\h\x\y\n\8\c\q\y\2\m\i\f\w\f\v\f\l\x\w\w\k\k\u\u\b\8\t\e\b\z\g\u\q\2\p\7\9\u\c\n\i\3\b\n\5\0\8\e\f\0\g\o\6\p\3\i\s\s\p\x\4\s\6\7\l\7\3\n\r\n\a\d\u\0\c\m\s\9\i\n\y\f\r\j\r\a\0\m\6\c\c\t\q\z\x\k\7\5\x\7\y\q\m\u\q\q\x\m\f\m\l\9\2\j\j\5\8\d\h\h\s\4\9\u\r\7\2\o\6\f\5\l\y\5\0\v\f\d\2\8\f\7\6\4\7\z\2\p\5\x\r\l\1\m\v\1\3\q\t\9\0\6\g\6\m\a\s\5\4\q\q\1\n\g\a\n\o\u\1\j\l\y\o\y\c\f\y\6\8\y\k\t\c\6\p\q\a\c\k\6\e\o\3\n\t\9\f\p\9\a\h\1\2\3\c\j\d\9\l\9\v\d\1\c\j\1\w\c\g\p\4\v\b\l\c\i\n\a\l\c\o\l\x\k\8\1\0\6\v\r\m\e\0\j\o\t\o\7\w\s\f\s\k\j\d\r\h\1\f\p\9\k\2\6\8\2\i\v\e\r\z\u\l\t\v\p\k\3\y\0\g\l\6\y\f\z\m\4\x\l\q\y\s\e\a\t\u\3\i\n\t\3\4\e\9\r\w\c\9\b\6\6\h\p\8\m\1\x\b\f\y\l\9\5\s\g\r\q\j\p\9\0\m\e\q\u\3\h\q\r\f\b\2\r\i\p\3\u\y\l\w\9\4\g\2\a\w\z\k\d\h\t\6\s\j\l\0\q\5\p\7\d\t\y\f\n\1\6\m\j\8\b\7\g\k\x\d\x\z\q\m\b\4\j\o\q\b\3\m\p\8\8\2\i\o\x\l\e\w\q\h\a\i\9\9\s\o\n\6\t\u\1\n\a\a\d\z\2\v\1\s\d\3\p\9\d\a\x\g\6\c\w\n\d\6\w\b\1\8\t\b\7\j\d\r\h\y\g\g\r\b\7\m\f\j\k\t\y\7\n\o\g\m\h\b\2\n\u\v\i\9\l\4\f\4\8\b\7\k\v\7\7\j\0\p\9\x\v\n\n\a\q\s\a\4\8\3\f\d\c\w\6\5\o\2\g\9\0\n\p\q\e\r\h\s\2\a\s\o\v\g\o\u\s\u\u\g\t\p\g\h\y\j\1\m\p\r\g\s\d\2\4\o\v\0\n\c\f\a\5\o\d\g\o\v\d\e\k\2\s\b\s\u\8\o\a\j\r\z\4\f\v\g\7\7\z\w\t\b\w\w\s\r\r\h\y\e\d\a\v\c\4\z\5\c\5\1\j\6\w\p\q\u\i\6\k\u\4\r\4\9\3\u\h\o\v\6\r\1\h\8\t\j\5\3\y\w\e\f\3\l\e\y\j\0\n\7\4\6\a\s\j\j\4\0\d\k\x\z\h\c\7\l\q\7\1\z\9\m\d\s\n\s\6\j\q\q\2\y\n\u\z\7\i\l\8\h\u\u\0\d\9\e\g\a\c\d\6\0\s\m\r\g\h\u\b\m\9\3\3\o\m\u\m\c\7\m\8\6\3\j\f\4\1\h\3\w\6\m\p\9\o\4\r\h\2\8\x\j\z\f\x\5\9\2\x\9\c\o\u\q\d\l\9\9\p\q\s\1\u\l\2\r\b\u\m\j\y\t\n\j\4\9\v\u\s\y\3\b\h\6\a\x\o\2\e\c\o\w\e\i\p\f\6\j\x\k\s\0\a\l\b\c\8\u\b\0\l\z\k\w\s\f\u\r\g\4\7\t\9\0\m\6\s\3\s\q\l\h\z\h\s\d\y\z\y\5\t\1\p\s\e\3\b\m\1\5\b\j\9\1\i\9\a\9\m\a\d\y\z\j\6\8\p\q\i\d\h\8\0\g\u\t\x\r\d\p\7\a\9\i\p\f\a\y\l\g\d\6\r\9\f\s\i\7\x\g\2\g\e\8\s\l\b\w\6\f\0\5\8\d\r\n\9\c\5\b\f\k\t\s\0\8\a\8\8\7\0\q\9\a\5\e\0\n\c\e\f\k\i\p\2\f\c\2\b\n\k\x\9\3\a\u\6\u\v\1\b\c\i\d\z\7\m\y\6\2\u\8\l\v\5\3\c\c\2\x\g\k\j\4\v\7\j\j\v\d\h\d\1\3\a\b\7\o\d\8\m\t\m\1\9\1\z\i\s\b\b\b\2\0\w\3\r\k\g\h\g\f\r\m\l\w\v\h\e\0\y\k\w\s\d\6\6\b\h\3\u\a\b\f\5\5\v\x\j\6\a\i\t\p\1\2\j\l\j\1\k\v\7\o\e\2\p\w\h\n\t\1\y\k\j\v\y\j\2\q\d\7\a\i\a\w\4\x\k\a\d\c\3\8\o\8\3\i\x\j\2\c\y\q\m\m\s\i\3\3\5\b\r\y\m\2\c\0\w\w\v\9\6\b\n\s\9\3\9\a\c\w\l\3\g\w\s\t\u\3\5\l\8\a\f\t\q\i\e\q\z\e\n\p\f\f\8\4\f\l\m\b\g\1\x\b\v\r\w\e\m\6\1\e\n\c\i\5\8\v\c\x\5\w\a\o\v\x\a\t\h\4\8\w\a\o\b\p\i\2\h\z\6\t\g\g\r\5\l\5\q\1\y\n\g\q\e\y\2\1\1\h\q\j\8\s\b\x\d\4\z\z\t\3\5\s\o\5\b\4\t\2\p\n\v\x\1\8\3\b\p\b\a\k\1\1\9\5\8\s\6\u\p\2\0\6\k\v\c\7\q\9\g\a\i\f\z\w\j\g\8\b\e\e\g\i
\v\d\3\b\7\2\t\o\i\a\f\i\6\j\2\l\0\h\8\q\a\n\4\9\e\3\x\3\w\x\n\2\3\9\8\u\s\9\w\s\l\z\k\7\u\g\4\r\2\q\v\q\7\8\b\n\s\a\f\g\4\d\a\k\y\o\6\8\7\y\p\e\h\1\i\t\y\i\3\0\4\x\e\l\p\x\4\b\2\r\3\1\7\m\2\w\i\f\8\8\3\f\5\r\m\4\7\s\3\v\y\k\n\t\8\t\u\o\v\b\d\u\q\e\j\4\8\l\q\z\o\v\8\6\m\w\c\z\f\p\f\f\1\s\w\r\m\y\t\4\3\o\d\p\j\7\q\g\y\2\s\4\r\8\u\x\u\i\q\k\y\a\w\s\1\e\1\1\j\2\b\9\p\g\m\b\3\r\4\q\c\m\c\a\w\4\5\5\q\z\j\o\r\h\j\q\s\3\4\j\d\j\1\c\b\j\m\2\p\0\w\v\s\7\k\8\e\j\7\5\s\m\d\k\x\0\w\1\x\m\5\9\3\f\o\x\r\m\6\0\a\7\u\2\e\8\m\z\s\r\j\0\m\2\u\d\4\h\8\8\q\5\u\p\s\t\4\4\x\5\1\t\k\2\y\k\s\d\n\u\o\a\y\2\g\z\4\y\3\1\f\h\8\y\l\b\b\c\x\k\o\e\f\9\m\1\h\3\8\r\i\6\n\q\3\z\t\8\w\t\8\q\p\0\l\g\o\8\i\r\n\4\u\7\2\h\c\j\o\3\2\r\u\l\g\p\z\f\3\p\0\l\r\r\5\7\7\7\p\q\s\p\v\l\b\l\5\a\u\l\n\x\x\l\o\l\3\5\y\j\t\h\7\r\q\p\g\e\c\8\t\f\s\p\9\l\0\y\1\f\p\6\d\y\d\p\b\5\0\5\a\f\u\r\a\0\f\2\e\3\4\z\d\e\x\z\i\d\k\d\t\r\q\i\r\y\f\k\h\g\p\i\i\d\9\b\i\k\9\c\l\y\d\6\5\s\q\v\s\q\b\e\7\x\n\0\s\k\0\8\a\r\l\q\4\d\2\n\v\3\u\3\k\0\e\u\g\4\7\t\r\o\a\a\m\d\z\b\y\v\r\c\4\i\e\k\p\n\n\l\o\l\5\b\7\q\k\q\b\u\1\d\1\3\3\g\4\z\c\l\z\p\u\y\x\a\9\1\h\s\7\p\o\n\e\r\a\3\l\5\5\y\w\0\z\h\a\w\x\i\q\p\9\7\7\v\7\0\b\p\2\q\o\d\8\j\o\v\b\t\p\8\v\f\x\m\x\i\b\w\h\k\6\p\y\4\w\8\q\9\4\z\z\t\p\7\p\y\t\m\8\s\g\o\x\7\w\4\g\3\4\j\t\v\j\u\z\t\m\n\d\t\7\f\0\o\r\j\q\l\8\h\a\n\r\0\8\4\k\o\d\b\m\4\s\2\1\w\j\h\p\7\z\y\t\s\2\y\i\v\y\b\c\k\2\b\p\n\1\5\0\e\x\h\o\2\c\1\c\n\e\c\m\1\l\r\q\5\y\b\2\f\c\k\k\3\2\t\m\c\e\j\s\n\h\3\m\e\3\m\r\8\g\t\n\3\c\h\r\b\b\p\g\0\5\w\k\m\j\5\p\h\a\v\4\n\n\i\1\e\x\f\3\7\6\c\m\u\g\9\z\0\u\8\p\q\q\t\4\p\i\8\i\q\l\8\k\7\9\g\u\t\e\z\3\i\k\3\n\5\7\q\y\l\c\f\s\8\k\p\q\v\4\k\c\x\v\s\d\x\u\z\9\q\5\9\7\o\i\m\1\k\0\2\s\1\5\3\1\u\j\v\t\e\t\5\u\2\v\8\5\0\z\o\t\o\f\z\a\n\5\p\v\4\8\4\6\8\9\v\u\d\2\t\7\c\c\2\b\n\g\0\f\x\t\r\z\b\i\g\7\x\6\l\q\2\p\u\9\b\9\k\i\b\0\w\c\k\w\8\8\f\5\x\p\l\5\j\3\z\y\r\e\7\b\l\j\d\6\5\g\t\x\3\0\g\v\n\s\e\n\d\h\6\8\5\t\n\p\f\n\t\w\5\6\p\k\6\d\l\s\i\3\g\6\f\5\w\2\o\r\i\z\x\i\z\y\0\o\g\m\b\c\c\7\2\a\l\f\6\p\j\7\y\0\5\h\n\8\m\o\x\h\u\u\j\z\r\q\o\p\y\m\1\o\s\a\b\9\1\a\7\1\1\t\j\1\w\i\y\l\z\l\1\i\6\8\a\9\f\4\8\9\5\b\1\b\7\8\z\t\e\s\7\c\p\a\2\j\h\7\o\e\s\r\j\i\n\l\2\f\4\v\d\c\e\4\l\4\o\i\7\z\r\d\9\c\4\1\y\t\6\k\i\m\b\8\q\v\h\d\7\q\5\i\5\6\m\t\a\r\l\b\z\h\s\k\7\n\2\v\6\z\u\i\p\e\i\7\l\f\r\0\6\g\x\n\b\r\5\h\4\l\n\z\u\4\n\u\w\x\6\7\e\2\5\5\p\5\1\h\n\4\e\w\l\c\7\q\n\7\2\h\x\3\b\5\2\d\t\j\6\g\c\l\q\u\a\d\6\s\l\p\6\e\7\f\3\0\2\q\c\8\4\u\x\s\i\i\h\p\j\g\4\z\t\d\z\d\u\l\b\8\o\v\l\v\m\2\4\x\l\e\m\c\p\h\r\r\t\d\p\q\8\s\0\e\m\a\y\6\b\r\8\s\m\w\n\x\4\3\i\8\a ]] 00:37:51.652 00:37:51.652 real 0m4.496s 00:37:51.652 user 0m3.752s 00:37:51.652 sys 0m0.484s 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:51.652 11:31:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:37:51.652 { 00:37:51.652 "subsystems": [ 00:37:51.652 { 00:37:51.652 "subsystem": "bdev", 00:37:51.652 "config": [ 00:37:51.652 { 00:37:51.652 "params": { 00:37:51.652 "trtype": "pcie", 00:37:51.652 "name": "Nvme0", 00:37:51.652 "traddr": "0000:00:10.0" 00:37:51.652 }, 00:37:51.652 "method": "bdev_nvme_attach_controller" 00:37:51.652 }, 00:37:51.652 { 00:37:51.652 "method": "bdev_wait_for_examine" 00:37:51.652 } 00:37:51.652 ] 00:37:51.652 } 00:37:51.652 ] 00:37:51.652 } 00:37:51.652 [2024-05-15 11:31:10.060513] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:51.652 [2024-05-15 11:31:10.060693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78477 ] 00:37:51.652 [2024-05-15 11:31:10.235581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.911 [2024-05-15 11:31:10.456960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.854  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:53.854 00:37:53.854 11:31:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:53.854 ************************************ 00:37:53.854 END TEST spdk_dd_basic_rw 00:37:53.854 ************************************ 00:37:53.854 00:37:53.854 real 0m54.374s 00:37:53.854 user 0m45.031s 00:37:53.854 sys 0m6.208s 00:37:53.854 11:31:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:53.854 11:31:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:37:53.854 11:31:12 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:37:53.854 11:31:12 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:53.854 11:31:12 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:53.854 11:31:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:53.854 ************************************ 00:37:53.854 START TEST spdk_dd_posix 00:37:53.854 ************************************ 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:37:53.854 * Looking for test storage... 
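For reference, the clear_nvme step above writes a single 1 MiB block of zeroes to the Nvme0n1 bdev, feeding spdk_dd a generated bdev configuration over /dev/fd/62. Written out as an ordinary file, the configuration printed in the log is equivalent to the sketch below (nvme0.json is a hypothetical name; the values are the ones visible above):

    # Recreate the config that gen_conf emitted on /dev/fd/62 in this run.
    cat > nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "name": "Nvme0", "traddr": "0000:00:10.0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # Zero the first 1 MiB of the bdev, as clear_nvme does (one block of bs=1048576).
    ./build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json nvme0.json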
00:37:53.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- 
# printf '* First test run%s\n' ', using AIO' 00:37:53.854 * First test run, using AIO 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:37:53.854 ************************************ 00:37:53.854 START TEST dd_flag_append 00:37:53.854 ************************************ 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1121 -- # append 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=3aqfo6cmk1ob9wp8yrib4me3qt61w6a5 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=w1m31piyitcc29qngebnrhtcampluj32 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 3aqfo6cmk1ob9wp8yrib4me3qt61w6a5 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s w1m31piyitcc29qngebnrhtcampluj32 00:37:53.854 11:31:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:37:53.854 [2024-05-15 11:31:12.428537] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
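The dd_flag_append run above generates two 32-character strings with gen_bytes, writes them into dd.dump0 and dd.dump1 (the printf %s calls; xtrace does not show the redirections), and then copies dump0 onto dump1 with --oflag=append. The long backslash-escaped blob that follows in the log is bash xtrace printing the expected concatenation as the literal right-hand side of a [[ ... == ... ]] check. A minimal equivalent, with paths shortened and the strings taken from this run:

    # Sketch of the dd_flag_append check (bash; full paths shortened to the
    # dd.dump* basenames used by the suite).
    dump0=3aqfo6cmk1ob9wp8yrib4me3qt61w6a5
    dump1=w1m31piyitcc29qngebnrhtcampluj32
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    ./build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
    # dd.dump1 must now hold its original contents followed by dump0's.
    [[ $(< dd.dump1) == "${dump1}${dump0}" ]]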
00:37:53.854 [2024-05-15 11:31:12.428715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78573 ] 00:37:54.112 [2024-05-15 11:31:12.596615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.370 [2024-05-15 11:31:12.850490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.012  Copying: 32/32 [B] (average 31 kBps) 00:37:56.012 00:37:56.012 ************************************ 00:37:56.012 END TEST dd_flag_append 00:37:56.012 ************************************ 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ w1m31piyitcc29qngebnrhtcampluj323aqfo6cmk1ob9wp8yrib4me3qt61w6a5 == \w\1\m\3\1\p\i\y\i\t\c\c\2\9\q\n\g\e\b\n\r\h\t\c\a\m\p\l\u\j\3\2\3\a\q\f\o\6\c\m\k\1\o\b\9\w\p\8\y\r\i\b\4\m\e\3\q\t\6\1\w\6\a\5 ]] 00:37:56.012 00:37:56.012 real 0m2.150s 00:37:56.012 user 0m1.723s 00:37:56.012 sys 0m0.224s 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:37:56.012 ************************************ 00:37:56.012 START TEST dd_flag_directory 00:37:56.012 ************************************ 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1121 -- # directory 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:56.012 11:31:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:56.012 [2024-05-15 11:31:14.622932] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:56.012 [2024-05-15 11:31:14.623108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78626 ] 00:37:56.271 [2024-05-15 11:31:14.775474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.530 [2024-05-15 11:31:15.002858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.788 [2024-05-15 11:31:15.365768] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:37:56.788 [2024-05-15 11:31:15.366027] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:37:56.788 [2024-05-15 11:31:15.366084] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:57.726 [2024-05-15 11:31:16.234556] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:58.293 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:58.294 11:31:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:37:58.294 [2024-05-15 11:31:16.775646] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:37:58.294 [2024-05-15 11:31:16.776083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78658 ] 00:37:58.294 [2024-05-15 11:31:16.926942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.552 [2024-05-15 11:31:17.148444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.120 [2024-05-15 11:31:17.504646] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:37:59.120 [2024-05-15 11:31:17.504731] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:37:59.120 [2024-05-15 11:31:17.504766] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:00.056 [2024-05-15 11:31:18.339643] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:00.315 ************************************ 00:38:00.315 END TEST dd_flag_directory 00:38:00.315 ************************************ 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:00.315 00:38:00.315 real 0m4.249s 00:38:00.315 user 0m3.390s 00:38:00.315 sys 0m0.462s 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:00.315 ************************************ 00:38:00.315 START TEST dd_flag_nofollow 00:38:00.315 
************************************ 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1121 -- # nofollow 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:00.315 11:31:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:00.315 [2024-05-15 11:31:18.929521] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
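dd_flag_nofollow first links dd.dump0.link and dd.dump1.link to the two dump files (the ln -fs calls above) and then expects spdk_dd to refuse to open a symlink when nofollow is requested, so the "Too many levels of symbolic links" errors that follow are the intended outcome; the NOT wrapper turns that failure into a pass. The input-side half looks roughly like this (paths shortened):

    # Input side of dd_flag_nofollow, roughly as exercised above.
    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    # With --iflag=nofollow the open of a symlink must fail (ELOOP), so a
    # non-zero exit from spdk_dd is what the test expects here.
    if ./build/bin/spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
        echo "unexpected success: nofollow followed a symlink" >&2
        exit 1
    fi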
00:38:00.315 [2024-05-15 11:31:18.929695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78704 ] 00:38:00.574 [2024-05-15 11:31:19.104396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.832 [2024-05-15 11:31:19.319249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.091 [2024-05-15 11:31:19.666416] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:01.091 [2024-05-15 11:31:19.666514] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:01.091 [2024-05-15 11:31:19.666547] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:02.026 [2024-05-15 11:31:20.514246] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:02.284 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:38:02.284 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:02.284 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:38:02.284 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:38:02.284 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:38:02.284 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:02.284 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:02.284 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:38:02.285 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:02.285 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:02.285 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:02.285 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:02.285 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:02.285 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:02.285 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:02.285 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:02.285 11:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:02.285 11:31:20 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:02.542 [2024-05-15 11:31:21.038718] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:38:02.542 [2024-05-15 11:31:21.039139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78741 ] 00:38:02.801 [2024-05-15 11:31:21.204040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.801 [2024-05-15 11:31:21.427058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.367 [2024-05-15 11:31:21.791366] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:03.367 [2024-05-15 11:31:21.791446] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:03.367 [2024-05-15 11:31:21.791482] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:04.302 [2024-05-15 11:31:22.643035] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:04.561 11:31:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:04.561 [2024-05-15 11:31:23.183603] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
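The es= lines sprinkled through these negative tests come from the NOT helper in autotest_common.sh: it records the exit status of the wrapped spdk_dd call, folds statuses above 128 down (236 became 108 in the directory test, 216 becomes 88 here), collapses any remaining failure to 1, and finally asserts that the status was non-zero. A condensed reconstruction of that logic; the real helper handles more cases:

    # Condensed sketch of the expect-failure wrapper seen in the trace above.
    NOT() {
        local es=0
        "$@" || es=$?
        # Exit statuses above 128 are folded down before classification,
        # matching the es=236 -> 108 and es=216 -> 88 transitions in the log.
        (( es > 128 )) && es=$(( es - 128 ))
        es=$(( es != 0 ? 1 : 0 ))   # any failure collapses to 1
        (( !es == 0 ))              # succeed only if the wrapped command failed
    }
    # Usage matching the nofollow test:
    NOT ./build/bin/spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1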
00:38:04.561 [2024-05-15 11:31:23.183803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78767 ] 00:38:04.819 [2024-05-15 11:31:23.335282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.078 [2024-05-15 11:31:23.557214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.710  Copying: 512/512 [B] (average 500 kBps) 00:38:06.710 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 0irzfa3422nr2x5g7zjjhtrkpgsipzfews3tths12u3an56qpa3eplw7zjcsjttl3pcmwjvax5jsr9vnaf82uwxc562mldwfv42knotpf115jrcet3b07qa9pnhatgz6pdt6o6sh5ues43lldm284yumytkzantirugx5hzex4fkkmwd8u501dfhi8v7cwdcf610mw8hgxq6h0ahx1wez15a005zecjcc04tzizja120601etkt9wwhu65rxyc446ly3virc7ea7ui1w4vmi0jaqhy71yxvb64ryzxoiv91s05io4jb2e0k7cwlrf7eqe0adn1kb3sg48ikua3axoycmxji5lo43rev4u2cndss2qhvwit6urid9fbadish7nd3x4kns32ixu2ay3zzhyl692vbyogcotjfpu5nx6fih0ttk6xnsw05hg1krekg33doko1g6p2fdkore9qkwkn9j8ep956rq0o8oxy66qmra4h7hbjuf5yvccus2f76l == \0\i\r\z\f\a\3\4\2\2\n\r\2\x\5\g\7\z\j\j\h\t\r\k\p\g\s\i\p\z\f\e\w\s\3\t\t\h\s\1\2\u\3\a\n\5\6\q\p\a\3\e\p\l\w\7\z\j\c\s\j\t\t\l\3\p\c\m\w\j\v\a\x\5\j\s\r\9\v\n\a\f\8\2\u\w\x\c\5\6\2\m\l\d\w\f\v\4\2\k\n\o\t\p\f\1\1\5\j\r\c\e\t\3\b\0\7\q\a\9\p\n\h\a\t\g\z\6\p\d\t\6\o\6\s\h\5\u\e\s\4\3\l\l\d\m\2\8\4\y\u\m\y\t\k\z\a\n\t\i\r\u\g\x\5\h\z\e\x\4\f\k\k\m\w\d\8\u\5\0\1\d\f\h\i\8\v\7\c\w\d\c\f\6\1\0\m\w\8\h\g\x\q\6\h\0\a\h\x\1\w\e\z\1\5\a\0\0\5\z\e\c\j\c\c\0\4\t\z\i\z\j\a\1\2\0\6\0\1\e\t\k\t\9\w\w\h\u\6\5\r\x\y\c\4\4\6\l\y\3\v\i\r\c\7\e\a\7\u\i\1\w\4\v\m\i\0\j\a\q\h\y\7\1\y\x\v\b\6\4\r\y\z\x\o\i\v\9\1\s\0\5\i\o\4\j\b\2\e\0\k\7\c\w\l\r\f\7\e\q\e\0\a\d\n\1\k\b\3\s\g\4\8\i\k\u\a\3\a\x\o\y\c\m\x\j\i\5\l\o\4\3\r\e\v\4\u\2\c\n\d\s\s\2\q\h\v\w\i\t\6\u\r\i\d\9\f\b\a\d\i\s\h\7\n\d\3\x\4\k\n\s\3\2\i\x\u\2\a\y\3\z\z\h\y\l\6\9\2\v\b\y\o\g\c\o\t\j\f\p\u\5\n\x\6\f\i\h\0\t\t\k\6\x\n\s\w\0\5\h\g\1\k\r\e\k\g\3\3\d\o\k\o\1\g\6\p\2\f\d\k\o\r\e\9\q\k\w\k\n\9\j\8\e\p\9\5\6\r\q\0\o\8\o\x\y\6\6\q\m\r\a\4\h\7\h\b\j\u\f\5\y\v\c\c\u\s\2\f\7\6\l ]] 00:38:06.711 00:38:06.711 real 0m6.402s 00:38:06.711 user 0m5.104s 00:38:06.711 sys 0m0.700s 00:38:06.711 ************************************ 00:38:06.711 END TEST dd_flag_nofollow 00:38:06.711 ************************************ 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:06.711 ************************************ 00:38:06.711 START TEST dd_flag_noatime 00:38:06.711 ************************************ 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1121 -- # noatime 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 
-- # gen_bytes 512 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1715772683 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1715772685 00:38:06.711 11:31:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:38:07.644 11:31:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:07.902 [2024-05-15 11:31:26.393062] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:38:07.902 [2024-05-15 11:31:26.393270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78832 ] 00:38:08.169 [2024-05-15 11:31:26.564661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.459 [2024-05-15 11:31:26.816403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.093  Copying: 512/512 [B] (average 500 kBps) 00:38:10.093 00:38:10.093 11:31:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:10.093 11:31:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1715772683 )) 00:38:10.093 11:31:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:10.093 11:31:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1715772685 )) 00:38:10.093 11:31:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:10.093 [2024-05-15 11:31:28.552164] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
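dd_flag_noatime records the access time of dd.dump0 with stat %X (1715772683 in this run), sleeps one second, copies the file with --iflag=noatime, and checks that the atime has not moved; a second copy without the flag is then expected to advance it. In outline (assumes GNU coreutils stat and a filesystem that updates atimes on read):

    # Outline of dd_flag_noatime, paths shortened.
    atime_if=$(stat --printf=%X dd.dump0)
    sleep 1
    ./build/bin/spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))    # atime must not have changed
    ./build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1   # plain copy without noatime
    (( atime_if < $(stat --printf=%X dd.dump0) ))     # now the atime should advance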
00:38:10.093 [2024-05-15 11:31:28.552337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78862 ] 00:38:10.093 [2024-05-15 11:31:28.707640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.352 [2024-05-15 11:31:28.925356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.292  Copying: 512/512 [B] (average 500 kBps) 00:38:12.292 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:12.292 ************************************ 00:38:12.292 END TEST dd_flag_noatime 00:38:12.292 ************************************ 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1715772689 )) 00:38:12.292 00:38:12.292 real 0m5.292s 00:38:12.292 user 0m3.411s 00:38:12.292 sys 0m0.476s 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:12.292 ************************************ 00:38:12.292 START TEST dd_flags_misc 00:38:12.292 ************************************ 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1121 -- # io 00:38:12.292 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:38:12.293 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:38:12.293 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:38:12.293 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:12.293 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:12.293 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:12.293 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:12.293 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:12.293 11:31:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:12.293 [2024-05-15 11:31:30.702472] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
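dd_flags_misc repeats the same 512-byte copy for every combination of an input flag from (direct, nonblock) and an output flag from (direct, nonblock, sync, dsync); the names mirror GNU dd's iflag/oflag vocabulary. After each copy the destination is compared against the known source string, which is what the long escaped blobs below are. The shape of the loop, condensed:

    # Condensed shape of dd_flags_misc; the real test regenerates the 512-byte
    # source once per input flag via gen_bytes 512.
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        src=$(< dd.dump0)                  # expected contents of the source file
        for flag_rw in "${flags_rw[@]}"; do
            ./build/bin/spdk_dd --if=dd.dump0 --iflag="$flag_ro" \
                                --of=dd.dump1 --oflag="$flag_rw"
            [[ $(< dd.dump1) == "$src" ]]  # the copy must round-trip byte for byte
        done
    done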
00:38:12.293 [2024-05-15 11:31:30.702660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78916 ] 00:38:12.293 [2024-05-15 11:31:30.859509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.550 [2024-05-15 11:31:31.175053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.487  Copying: 512/512 [B] (average 500 kBps) 00:38:14.487 00:38:14.487 11:31:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lqvkk6cb6h122t4fnwqlkiu5wmbzsefo4sat3zhxvq367rcbbh5h5p66d0crrehqools89tybsnbtsi2sfk8uagqrjhtjezt2ne84dz29s2rprruein3fg184gm9vub9c0ip7zrxlecbw8ypfd3494hc6w4gnahd91ctglwv11nem87ptsq8hkv1j0a8nxbjs9qa1eor0tcev53fnqszyiuoxerc2xlu8hzr3h4mfdbkl2paf7xyq070z22orb0gq300u5sq7tesjhc2e739bkf4zgiqs0eenze7ew99ls28qlp1p72zvjfm1oqeil6ynp1n4uvx13m6zwah5o67hxsn5s4u1zaxdz5qg5e60mwns1vc0a3cii9kb2aeg07rdf162d9pigmm1ylhp6t0fmy7krygeag3wz9s8suy450syite0z11ic838n935xizgievxy9x3hq26gca7lf16um4ooe9s4qbsc3guy28krhlq9kuuxwdk3pkbp3kr51m == \l\q\v\k\k\6\c\b\6\h\1\2\2\t\4\f\n\w\q\l\k\i\u\5\w\m\b\z\s\e\f\o\4\s\a\t\3\z\h\x\v\q\3\6\7\r\c\b\b\h\5\h\5\p\6\6\d\0\c\r\r\e\h\q\o\o\l\s\8\9\t\y\b\s\n\b\t\s\i\2\s\f\k\8\u\a\g\q\r\j\h\t\j\e\z\t\2\n\e\8\4\d\z\2\9\s\2\r\p\r\r\u\e\i\n\3\f\g\1\8\4\g\m\9\v\u\b\9\c\0\i\p\7\z\r\x\l\e\c\b\w\8\y\p\f\d\3\4\9\4\h\c\6\w\4\g\n\a\h\d\9\1\c\t\g\l\w\v\1\1\n\e\m\8\7\p\t\s\q\8\h\k\v\1\j\0\a\8\n\x\b\j\s\9\q\a\1\e\o\r\0\t\c\e\v\5\3\f\n\q\s\z\y\i\u\o\x\e\r\c\2\x\l\u\8\h\z\r\3\h\4\m\f\d\b\k\l\2\p\a\f\7\x\y\q\0\7\0\z\2\2\o\r\b\0\g\q\3\0\0\u\5\s\q\7\t\e\s\j\h\c\2\e\7\3\9\b\k\f\4\z\g\i\q\s\0\e\e\n\z\e\7\e\w\9\9\l\s\2\8\q\l\p\1\p\7\2\z\v\j\f\m\1\o\q\e\i\l\6\y\n\p\1\n\4\u\v\x\1\3\m\6\z\w\a\h\5\o\6\7\h\x\s\n\5\s\4\u\1\z\a\x\d\z\5\q\g\5\e\6\0\m\w\n\s\1\v\c\0\a\3\c\i\i\9\k\b\2\a\e\g\0\7\r\d\f\1\6\2\d\9\p\i\g\m\m\1\y\l\h\p\6\t\0\f\m\y\7\k\r\y\g\e\a\g\3\w\z\9\s\8\s\u\y\4\5\0\s\y\i\t\e\0\z\1\1\i\c\8\3\8\n\9\3\5\x\i\z\g\i\e\v\x\y\9\x\3\h\q\2\6\g\c\a\7\l\f\1\6\u\m\4\o\o\e\9\s\4\q\b\s\c\3\g\u\y\2\8\k\r\h\l\q\9\k\u\u\x\w\d\k\3\p\k\b\p\3\k\r\5\1\m ]] 00:38:14.487 11:31:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:14.487 11:31:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:14.487 [2024-05-15 11:31:32.893262] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:14.487 [2024-05-15 11:31:32.893443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78949 ] 00:38:14.487 [2024-05-15 11:31:33.053016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.744 [2024-05-15 11:31:33.266122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.378  Copying: 512/512 [B] (average 500 kBps) 00:38:16.378 00:38:16.378 11:31:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lqvkk6cb6h122t4fnwqlkiu5wmbzsefo4sat3zhxvq367rcbbh5h5p66d0crrehqools89tybsnbtsi2sfk8uagqrjhtjezt2ne84dz29s2rprruein3fg184gm9vub9c0ip7zrxlecbw8ypfd3494hc6w4gnahd91ctglwv11nem87ptsq8hkv1j0a8nxbjs9qa1eor0tcev53fnqszyiuoxerc2xlu8hzr3h4mfdbkl2paf7xyq070z22orb0gq300u5sq7tesjhc2e739bkf4zgiqs0eenze7ew99ls28qlp1p72zvjfm1oqeil6ynp1n4uvx13m6zwah5o67hxsn5s4u1zaxdz5qg5e60mwns1vc0a3cii9kb2aeg07rdf162d9pigmm1ylhp6t0fmy7krygeag3wz9s8suy450syite0z11ic838n935xizgievxy9x3hq26gca7lf16um4ooe9s4qbsc3guy28krhlq9kuuxwdk3pkbp3kr51m == \l\q\v\k\k\6\c\b\6\h\1\2\2\t\4\f\n\w\q\l\k\i\u\5\w\m\b\z\s\e\f\o\4\s\a\t\3\z\h\x\v\q\3\6\7\r\c\b\b\h\5\h\5\p\6\6\d\0\c\r\r\e\h\q\o\o\l\s\8\9\t\y\b\s\n\b\t\s\i\2\s\f\k\8\u\a\g\q\r\j\h\t\j\e\z\t\2\n\e\8\4\d\z\2\9\s\2\r\p\r\r\u\e\i\n\3\f\g\1\8\4\g\m\9\v\u\b\9\c\0\i\p\7\z\r\x\l\e\c\b\w\8\y\p\f\d\3\4\9\4\h\c\6\w\4\g\n\a\h\d\9\1\c\t\g\l\w\v\1\1\n\e\m\8\7\p\t\s\q\8\h\k\v\1\j\0\a\8\n\x\b\j\s\9\q\a\1\e\o\r\0\t\c\e\v\5\3\f\n\q\s\z\y\i\u\o\x\e\r\c\2\x\l\u\8\h\z\r\3\h\4\m\f\d\b\k\l\2\p\a\f\7\x\y\q\0\7\0\z\2\2\o\r\b\0\g\q\3\0\0\u\5\s\q\7\t\e\s\j\h\c\2\e\7\3\9\b\k\f\4\z\g\i\q\s\0\e\e\n\z\e\7\e\w\9\9\l\s\2\8\q\l\p\1\p\7\2\z\v\j\f\m\1\o\q\e\i\l\6\y\n\p\1\n\4\u\v\x\1\3\m\6\z\w\a\h\5\o\6\7\h\x\s\n\5\s\4\u\1\z\a\x\d\z\5\q\g\5\e\6\0\m\w\n\s\1\v\c\0\a\3\c\i\i\9\k\b\2\a\e\g\0\7\r\d\f\1\6\2\d\9\p\i\g\m\m\1\y\l\h\p\6\t\0\f\m\y\7\k\r\y\g\e\a\g\3\w\z\9\s\8\s\u\y\4\5\0\s\y\i\t\e\0\z\1\1\i\c\8\3\8\n\9\3\5\x\i\z\g\i\e\v\x\y\9\x\3\h\q\2\6\g\c\a\7\l\f\1\6\u\m\4\o\o\e\9\s\4\q\b\s\c\3\g\u\y\2\8\k\r\h\l\q\9\k\u\u\x\w\d\k\3\p\k\b\p\3\k\r\5\1\m ]] 00:38:16.378 11:31:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:16.378 11:31:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:16.378 [2024-05-15 11:31:34.993140] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:16.378 [2024-05-15 11:31:34.993344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78977 ] 00:38:16.635 [2024-05-15 11:31:35.151965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.893 [2024-05-15 11:31:35.405228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.830  Copying: 512/512 [B] (average 250 kBps) 00:38:18.830 00:38:18.830 11:31:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lqvkk6cb6h122t4fnwqlkiu5wmbzsefo4sat3zhxvq367rcbbh5h5p66d0crrehqools89tybsnbtsi2sfk8uagqrjhtjezt2ne84dz29s2rprruein3fg184gm9vub9c0ip7zrxlecbw8ypfd3494hc6w4gnahd91ctglwv11nem87ptsq8hkv1j0a8nxbjs9qa1eor0tcev53fnqszyiuoxerc2xlu8hzr3h4mfdbkl2paf7xyq070z22orb0gq300u5sq7tesjhc2e739bkf4zgiqs0eenze7ew99ls28qlp1p72zvjfm1oqeil6ynp1n4uvx13m6zwah5o67hxsn5s4u1zaxdz5qg5e60mwns1vc0a3cii9kb2aeg07rdf162d9pigmm1ylhp6t0fmy7krygeag3wz9s8suy450syite0z11ic838n935xizgievxy9x3hq26gca7lf16um4ooe9s4qbsc3guy28krhlq9kuuxwdk3pkbp3kr51m == \l\q\v\k\k\6\c\b\6\h\1\2\2\t\4\f\n\w\q\l\k\i\u\5\w\m\b\z\s\e\f\o\4\s\a\t\3\z\h\x\v\q\3\6\7\r\c\b\b\h\5\h\5\p\6\6\d\0\c\r\r\e\h\q\o\o\l\s\8\9\t\y\b\s\n\b\t\s\i\2\s\f\k\8\u\a\g\q\r\j\h\t\j\e\z\t\2\n\e\8\4\d\z\2\9\s\2\r\p\r\r\u\e\i\n\3\f\g\1\8\4\g\m\9\v\u\b\9\c\0\i\p\7\z\r\x\l\e\c\b\w\8\y\p\f\d\3\4\9\4\h\c\6\w\4\g\n\a\h\d\9\1\c\t\g\l\w\v\1\1\n\e\m\8\7\p\t\s\q\8\h\k\v\1\j\0\a\8\n\x\b\j\s\9\q\a\1\e\o\r\0\t\c\e\v\5\3\f\n\q\s\z\y\i\u\o\x\e\r\c\2\x\l\u\8\h\z\r\3\h\4\m\f\d\b\k\l\2\p\a\f\7\x\y\q\0\7\0\z\2\2\o\r\b\0\g\q\3\0\0\u\5\s\q\7\t\e\s\j\h\c\2\e\7\3\9\b\k\f\4\z\g\i\q\s\0\e\e\n\z\e\7\e\w\9\9\l\s\2\8\q\l\p\1\p\7\2\z\v\j\f\m\1\o\q\e\i\l\6\y\n\p\1\n\4\u\v\x\1\3\m\6\z\w\a\h\5\o\6\7\h\x\s\n\5\s\4\u\1\z\a\x\d\z\5\q\g\5\e\6\0\m\w\n\s\1\v\c\0\a\3\c\i\i\9\k\b\2\a\e\g\0\7\r\d\f\1\6\2\d\9\p\i\g\m\m\1\y\l\h\p\6\t\0\f\m\y\7\k\r\y\g\e\a\g\3\w\z\9\s\8\s\u\y\4\5\0\s\y\i\t\e\0\z\1\1\i\c\8\3\8\n\9\3\5\x\i\z\g\i\e\v\x\y\9\x\3\h\q\2\6\g\c\a\7\l\f\1\6\u\m\4\o\o\e\9\s\4\q\b\s\c\3\g\u\y\2\8\k\r\h\l\q\9\k\u\u\x\w\d\k\3\p\k\b\p\3\k\r\5\1\m ]] 00:38:18.830 11:31:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:18.830 11:31:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:18.830 [2024-05-15 11:31:37.290225] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:18.830 [2024-05-15 11:31:37.290435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79002 ] 00:38:18.830 [2024-05-15 11:31:37.447783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.088 [2024-05-15 11:31:37.706201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:21.050  Copying: 512/512 [B] (average 250 kBps) 00:38:21.050 00:38:21.050 11:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lqvkk6cb6h122t4fnwqlkiu5wmbzsefo4sat3zhxvq367rcbbh5h5p66d0crrehqools89tybsnbtsi2sfk8uagqrjhtjezt2ne84dz29s2rprruein3fg184gm9vub9c0ip7zrxlecbw8ypfd3494hc6w4gnahd91ctglwv11nem87ptsq8hkv1j0a8nxbjs9qa1eor0tcev53fnqszyiuoxerc2xlu8hzr3h4mfdbkl2paf7xyq070z22orb0gq300u5sq7tesjhc2e739bkf4zgiqs0eenze7ew99ls28qlp1p72zvjfm1oqeil6ynp1n4uvx13m6zwah5o67hxsn5s4u1zaxdz5qg5e60mwns1vc0a3cii9kb2aeg07rdf162d9pigmm1ylhp6t0fmy7krygeag3wz9s8suy450syite0z11ic838n935xizgievxy9x3hq26gca7lf16um4ooe9s4qbsc3guy28krhlq9kuuxwdk3pkbp3kr51m == \l\q\v\k\k\6\c\b\6\h\1\2\2\t\4\f\n\w\q\l\k\i\u\5\w\m\b\z\s\e\f\o\4\s\a\t\3\z\h\x\v\q\3\6\7\r\c\b\b\h\5\h\5\p\6\6\d\0\c\r\r\e\h\q\o\o\l\s\8\9\t\y\b\s\n\b\t\s\i\2\s\f\k\8\u\a\g\q\r\j\h\t\j\e\z\t\2\n\e\8\4\d\z\2\9\s\2\r\p\r\r\u\e\i\n\3\f\g\1\8\4\g\m\9\v\u\b\9\c\0\i\p\7\z\r\x\l\e\c\b\w\8\y\p\f\d\3\4\9\4\h\c\6\w\4\g\n\a\h\d\9\1\c\t\g\l\w\v\1\1\n\e\m\8\7\p\t\s\q\8\h\k\v\1\j\0\a\8\n\x\b\j\s\9\q\a\1\e\o\r\0\t\c\e\v\5\3\f\n\q\s\z\y\i\u\o\x\e\r\c\2\x\l\u\8\h\z\r\3\h\4\m\f\d\b\k\l\2\p\a\f\7\x\y\q\0\7\0\z\2\2\o\r\b\0\g\q\3\0\0\u\5\s\q\7\t\e\s\j\h\c\2\e\7\3\9\b\k\f\4\z\g\i\q\s\0\e\e\n\z\e\7\e\w\9\9\l\s\2\8\q\l\p\1\p\7\2\z\v\j\f\m\1\o\q\e\i\l\6\y\n\p\1\n\4\u\v\x\1\3\m\6\z\w\a\h\5\o\6\7\h\x\s\n\5\s\4\u\1\z\a\x\d\z\5\q\g\5\e\6\0\m\w\n\s\1\v\c\0\a\3\c\i\i\9\k\b\2\a\e\g\0\7\r\d\f\1\6\2\d\9\p\i\g\m\m\1\y\l\h\p\6\t\0\f\m\y\7\k\r\y\g\e\a\g\3\w\z\9\s\8\s\u\y\4\5\0\s\y\i\t\e\0\z\1\1\i\c\8\3\8\n\9\3\5\x\i\z\g\i\e\v\x\y\9\x\3\h\q\2\6\g\c\a\7\l\f\1\6\u\m\4\o\o\e\9\s\4\q\b\s\c\3\g\u\y\2\8\k\r\h\l\q\9\k\u\u\x\w\d\k\3\p\k\b\p\3\k\r\5\1\m ]] 00:38:21.050 11:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:21.050 11:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:21.050 11:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:21.050 11:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:21.050 11:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:21.050 11:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:21.050 [2024-05-15 11:31:39.596400] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:21.050 [2024-05-15 11:31:39.596608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79031 ] 00:38:21.308 [2024-05-15 11:31:39.759301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.566 [2024-05-15 11:31:40.026834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.199  Copying: 512/512 [B] (average 500 kBps) 00:38:23.199 00:38:23.199 11:31:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ewa81f4d3jr9qkky9qkpzvisjf4eauf68i6wef7ygp2gm786lxf3dtyhw5twf4h1pcaez2k77to8z9rqxvpqtsxqpeww9gydbt1kng7f97dk3uqy1x186u6vi2qtu528wzn3qnl76r0lhyk7xh7xikpeju8rreqauflgzrwaoj1oro0swjz23gaols2u2epmzu91mahykfymn8sv2rwt2yvntp3ir6me5a7vjk0hhy3jwap0a5abwec85n6svcaokljny513t6ijmmln6jt4nnb0fvhgyfxug1k6id299owigesudwk27eu7iy3scmt7z9x1a9kkptat4p9icfpkhqabce9g9n109mqo7iceabjt8tzzihttyc78x80w7sbjpdouf42u1ysopsbx7kf62g5oqndhwh2ti64b86isfphm59vb3c6tjs80sc04rfnl6cakmsjioqt4yi8ssdt6v4hvv7duvy96fsng10araf2gbjp1chtjnmvlv8e9pvys == \e\w\a\8\1\f\4\d\3\j\r\9\q\k\k\y\9\q\k\p\z\v\i\s\j\f\4\e\a\u\f\6\8\i\6\w\e\f\7\y\g\p\2\g\m\7\8\6\l\x\f\3\d\t\y\h\w\5\t\w\f\4\h\1\p\c\a\e\z\2\k\7\7\t\o\8\z\9\r\q\x\v\p\q\t\s\x\q\p\e\w\w\9\g\y\d\b\t\1\k\n\g\7\f\9\7\d\k\3\u\q\y\1\x\1\8\6\u\6\v\i\2\q\t\u\5\2\8\w\z\n\3\q\n\l\7\6\r\0\l\h\y\k\7\x\h\7\x\i\k\p\e\j\u\8\r\r\e\q\a\u\f\l\g\z\r\w\a\o\j\1\o\r\o\0\s\w\j\z\2\3\g\a\o\l\s\2\u\2\e\p\m\z\u\9\1\m\a\h\y\k\f\y\m\n\8\s\v\2\r\w\t\2\y\v\n\t\p\3\i\r\6\m\e\5\a\7\v\j\k\0\h\h\y\3\j\w\a\p\0\a\5\a\b\w\e\c\8\5\n\6\s\v\c\a\o\k\l\j\n\y\5\1\3\t\6\i\j\m\m\l\n\6\j\t\4\n\n\b\0\f\v\h\g\y\f\x\u\g\1\k\6\i\d\2\9\9\o\w\i\g\e\s\u\d\w\k\2\7\e\u\7\i\y\3\s\c\m\t\7\z\9\x\1\a\9\k\k\p\t\a\t\4\p\9\i\c\f\p\k\h\q\a\b\c\e\9\g\9\n\1\0\9\m\q\o\7\i\c\e\a\b\j\t\8\t\z\z\i\h\t\t\y\c\7\8\x\8\0\w\7\s\b\j\p\d\o\u\f\4\2\u\1\y\s\o\p\s\b\x\7\k\f\6\2\g\5\o\q\n\d\h\w\h\2\t\i\6\4\b\8\6\i\s\f\p\h\m\5\9\v\b\3\c\6\t\j\s\8\0\s\c\0\4\r\f\n\l\6\c\a\k\m\s\j\i\o\q\t\4\y\i\8\s\s\d\t\6\v\4\h\v\v\7\d\u\v\y\9\6\f\s\n\g\1\0\a\r\a\f\2\g\b\j\p\1\c\h\t\j\n\m\v\l\v\8\e\9\p\v\y\s ]] 00:38:23.199 11:31:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:23.199 11:31:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:23.458 [2024-05-15 11:31:41.899234] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:23.458 [2024-05-15 11:31:41.899500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79070 ] 00:38:23.458 [2024-05-15 11:31:42.064545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.716 [2024-05-15 11:31:42.319250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.657  Copying: 512/512 [B] (average 500 kBps) 00:38:25.657 00:38:25.657 11:31:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ewa81f4d3jr9qkky9qkpzvisjf4eauf68i6wef7ygp2gm786lxf3dtyhw5twf4h1pcaez2k77to8z9rqxvpqtsxqpeww9gydbt1kng7f97dk3uqy1x186u6vi2qtu528wzn3qnl76r0lhyk7xh7xikpeju8rreqauflgzrwaoj1oro0swjz23gaols2u2epmzu91mahykfymn8sv2rwt2yvntp3ir6me5a7vjk0hhy3jwap0a5abwec85n6svcaokljny513t6ijmmln6jt4nnb0fvhgyfxug1k6id299owigesudwk27eu7iy3scmt7z9x1a9kkptat4p9icfpkhqabce9g9n109mqo7iceabjt8tzzihttyc78x80w7sbjpdouf42u1ysopsbx7kf62g5oqndhwh2ti64b86isfphm59vb3c6tjs80sc04rfnl6cakmsjioqt4yi8ssdt6v4hvv7duvy96fsng10araf2gbjp1chtjnmvlv8e9pvys == \e\w\a\8\1\f\4\d\3\j\r\9\q\k\k\y\9\q\k\p\z\v\i\s\j\f\4\e\a\u\f\6\8\i\6\w\e\f\7\y\g\p\2\g\m\7\8\6\l\x\f\3\d\t\y\h\w\5\t\w\f\4\h\1\p\c\a\e\z\2\k\7\7\t\o\8\z\9\r\q\x\v\p\q\t\s\x\q\p\e\w\w\9\g\y\d\b\t\1\k\n\g\7\f\9\7\d\k\3\u\q\y\1\x\1\8\6\u\6\v\i\2\q\t\u\5\2\8\w\z\n\3\q\n\l\7\6\r\0\l\h\y\k\7\x\h\7\x\i\k\p\e\j\u\8\r\r\e\q\a\u\f\l\g\z\r\w\a\o\j\1\o\r\o\0\s\w\j\z\2\3\g\a\o\l\s\2\u\2\e\p\m\z\u\9\1\m\a\h\y\k\f\y\m\n\8\s\v\2\r\w\t\2\y\v\n\t\p\3\i\r\6\m\e\5\a\7\v\j\k\0\h\h\y\3\j\w\a\p\0\a\5\a\b\w\e\c\8\5\n\6\s\v\c\a\o\k\l\j\n\y\5\1\3\t\6\i\j\m\m\l\n\6\j\t\4\n\n\b\0\f\v\h\g\y\f\x\u\g\1\k\6\i\d\2\9\9\o\w\i\g\e\s\u\d\w\k\2\7\e\u\7\i\y\3\s\c\m\t\7\z\9\x\1\a\9\k\k\p\t\a\t\4\p\9\i\c\f\p\k\h\q\a\b\c\e\9\g\9\n\1\0\9\m\q\o\7\i\c\e\a\b\j\t\8\t\z\z\i\h\t\t\y\c\7\8\x\8\0\w\7\s\b\j\p\d\o\u\f\4\2\u\1\y\s\o\p\s\b\x\7\k\f\6\2\g\5\o\q\n\d\h\w\h\2\t\i\6\4\b\8\6\i\s\f\p\h\m\5\9\v\b\3\c\6\t\j\s\8\0\s\c\0\4\r\f\n\l\6\c\a\k\m\s\j\i\o\q\t\4\y\i\8\s\s\d\t\6\v\4\h\v\v\7\d\u\v\y\9\6\f\s\n\g\1\0\a\r\a\f\2\g\b\j\p\1\c\h\t\j\n\m\v\l\v\8\e\9\p\v\y\s ]] 00:38:25.657 11:31:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:25.657 11:31:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:25.657 [2024-05-15 11:31:44.181848] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:25.657 [2024-05-15 11:31:44.182026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79095 ] 00:38:25.915 [2024-05-15 11:31:44.335504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.173 [2024-05-15 11:31:44.554265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.812  Copying: 512/512 [B] (average 125 kBps) 00:38:27.812 00:38:27.812 11:31:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ewa81f4d3jr9qkky9qkpzvisjf4eauf68i6wef7ygp2gm786lxf3dtyhw5twf4h1pcaez2k77to8z9rqxvpqtsxqpeww9gydbt1kng7f97dk3uqy1x186u6vi2qtu528wzn3qnl76r0lhyk7xh7xikpeju8rreqauflgzrwaoj1oro0swjz23gaols2u2epmzu91mahykfymn8sv2rwt2yvntp3ir6me5a7vjk0hhy3jwap0a5abwec85n6svcaokljny513t6ijmmln6jt4nnb0fvhgyfxug1k6id299owigesudwk27eu7iy3scmt7z9x1a9kkptat4p9icfpkhqabce9g9n109mqo7iceabjt8tzzihttyc78x80w7sbjpdouf42u1ysopsbx7kf62g5oqndhwh2ti64b86isfphm59vb3c6tjs80sc04rfnl6cakmsjioqt4yi8ssdt6v4hvv7duvy96fsng10araf2gbjp1chtjnmvlv8e9pvys == \e\w\a\8\1\f\4\d\3\j\r\9\q\k\k\y\9\q\k\p\z\v\i\s\j\f\4\e\a\u\f\6\8\i\6\w\e\f\7\y\g\p\2\g\m\7\8\6\l\x\f\3\d\t\y\h\w\5\t\w\f\4\h\1\p\c\a\e\z\2\k\7\7\t\o\8\z\9\r\q\x\v\p\q\t\s\x\q\p\e\w\w\9\g\y\d\b\t\1\k\n\g\7\f\9\7\d\k\3\u\q\y\1\x\1\8\6\u\6\v\i\2\q\t\u\5\2\8\w\z\n\3\q\n\l\7\6\r\0\l\h\y\k\7\x\h\7\x\i\k\p\e\j\u\8\r\r\e\q\a\u\f\l\g\z\r\w\a\o\j\1\o\r\o\0\s\w\j\z\2\3\g\a\o\l\s\2\u\2\e\p\m\z\u\9\1\m\a\h\y\k\f\y\m\n\8\s\v\2\r\w\t\2\y\v\n\t\p\3\i\r\6\m\e\5\a\7\v\j\k\0\h\h\y\3\j\w\a\p\0\a\5\a\b\w\e\c\8\5\n\6\s\v\c\a\o\k\l\j\n\y\5\1\3\t\6\i\j\m\m\l\n\6\j\t\4\n\n\b\0\f\v\h\g\y\f\x\u\g\1\k\6\i\d\2\9\9\o\w\i\g\e\s\u\d\w\k\2\7\e\u\7\i\y\3\s\c\m\t\7\z\9\x\1\a\9\k\k\p\t\a\t\4\p\9\i\c\f\p\k\h\q\a\b\c\e\9\g\9\n\1\0\9\m\q\o\7\i\c\e\a\b\j\t\8\t\z\z\i\h\t\t\y\c\7\8\x\8\0\w\7\s\b\j\p\d\o\u\f\4\2\u\1\y\s\o\p\s\b\x\7\k\f\6\2\g\5\o\q\n\d\h\w\h\2\t\i\6\4\b\8\6\i\s\f\p\h\m\5\9\v\b\3\c\6\t\j\s\8\0\s\c\0\4\r\f\n\l\6\c\a\k\m\s\j\i\o\q\t\4\y\i\8\s\s\d\t\6\v\4\h\v\v\7\d\u\v\y\9\6\f\s\n\g\1\0\a\r\a\f\2\g\b\j\p\1\c\h\t\j\n\m\v\l\v\8\e\9\p\v\y\s ]] 00:38:27.812 11:31:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:27.812 11:31:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:27.812 [2024-05-15 11:31:46.344854] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:27.812 [2024-05-15 11:31:46.345047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79124 ] 00:38:28.070 [2024-05-15 11:31:46.497233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.328 [2024-05-15 11:31:46.736552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.964  Copying: 512/512 [B] (average 250 kBps) 00:38:29.964 00:38:29.964 ************************************ 00:38:29.964 END TEST dd_flags_misc 00:38:29.964 ************************************ 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ewa81f4d3jr9qkky9qkpzvisjf4eauf68i6wef7ygp2gm786lxf3dtyhw5twf4h1pcaez2k77to8z9rqxvpqtsxqpeww9gydbt1kng7f97dk3uqy1x186u6vi2qtu528wzn3qnl76r0lhyk7xh7xikpeju8rreqauflgzrwaoj1oro0swjz23gaols2u2epmzu91mahykfymn8sv2rwt2yvntp3ir6me5a7vjk0hhy3jwap0a5abwec85n6svcaokljny513t6ijmmln6jt4nnb0fvhgyfxug1k6id299owigesudwk27eu7iy3scmt7z9x1a9kkptat4p9icfpkhqabce9g9n109mqo7iceabjt8tzzihttyc78x80w7sbjpdouf42u1ysopsbx7kf62g5oqndhwh2ti64b86isfphm59vb3c6tjs80sc04rfnl6cakmsjioqt4yi8ssdt6v4hvv7duvy96fsng10araf2gbjp1chtjnmvlv8e9pvys == \e\w\a\8\1\f\4\d\3\j\r\9\q\k\k\y\9\q\k\p\z\v\i\s\j\f\4\e\a\u\f\6\8\i\6\w\e\f\7\y\g\p\2\g\m\7\8\6\l\x\f\3\d\t\y\h\w\5\t\w\f\4\h\1\p\c\a\e\z\2\k\7\7\t\o\8\z\9\r\q\x\v\p\q\t\s\x\q\p\e\w\w\9\g\y\d\b\t\1\k\n\g\7\f\9\7\d\k\3\u\q\y\1\x\1\8\6\u\6\v\i\2\q\t\u\5\2\8\w\z\n\3\q\n\l\7\6\r\0\l\h\y\k\7\x\h\7\x\i\k\p\e\j\u\8\r\r\e\q\a\u\f\l\g\z\r\w\a\o\j\1\o\r\o\0\s\w\j\z\2\3\g\a\o\l\s\2\u\2\e\p\m\z\u\9\1\m\a\h\y\k\f\y\m\n\8\s\v\2\r\w\t\2\y\v\n\t\p\3\i\r\6\m\e\5\a\7\v\j\k\0\h\h\y\3\j\w\a\p\0\a\5\a\b\w\e\c\8\5\n\6\s\v\c\a\o\k\l\j\n\y\5\1\3\t\6\i\j\m\m\l\n\6\j\t\4\n\n\b\0\f\v\h\g\y\f\x\u\g\1\k\6\i\d\2\9\9\o\w\i\g\e\s\u\d\w\k\2\7\e\u\7\i\y\3\s\c\m\t\7\z\9\x\1\a\9\k\k\p\t\a\t\4\p\9\i\c\f\p\k\h\q\a\b\c\e\9\g\9\n\1\0\9\m\q\o\7\i\c\e\a\b\j\t\8\t\z\z\i\h\t\t\y\c\7\8\x\8\0\w\7\s\b\j\p\d\o\u\f\4\2\u\1\y\s\o\p\s\b\x\7\k\f\6\2\g\5\o\q\n\d\h\w\h\2\t\i\6\4\b\8\6\i\s\f\p\h\m\5\9\v\b\3\c\6\t\j\s\8\0\s\c\0\4\r\f\n\l\6\c\a\k\m\s\j\i\o\q\t\4\y\i\8\s\s\d\t\6\v\4\h\v\v\7\d\u\v\y\9\6\f\s\n\g\1\0\a\r\a\f\2\g\b\j\p\1\c\h\t\j\n\m\v\l\v\8\e\9\p\v\y\s ]] 00:38:29.964 00:38:29.964 real 0m17.828s 00:38:29.964 user 0m14.097s 00:38:29.964 sys 0m2.106s 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:38:29.964 * Second test run, using AIO 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:29.964 ************************************ 00:38:29.964 START TEST dd_flag_append_forced_aio 00:38:29.964 ************************************ 00:38:29.964 11:31:48 
spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1121 -- # append 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=ktcglyr7jaby6vp488rtn7rxjkkhuf26 00:38:29.964 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:38:29.965 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:29.965 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:29.965 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=dsfwfcxkvu7rv0mgxjn246cbm0g7q1by 00:38:29.965 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s ktcglyr7jaby6vp488rtn7rxjkkhuf26 00:38:29.965 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s dsfwfcxkvu7rv0mgxjn246cbm0g7q1by 00:38:29.965 11:31:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:38:29.965 [2024-05-15 11:31:48.574834] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:29.965 [2024-05-15 11:31:48.575081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79175 ] 00:38:30.225 [2024-05-15 11:31:48.745567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.483 [2024-05-15 11:31:48.967949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.169  Copying: 32/32 [B] (average 31 kBps) 00:38:32.169 00:38:32.169 ************************************ 00:38:32.169 END TEST dd_flag_append_forced_aio 00:38:32.169 ************************************ 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ dsfwfcxkvu7rv0mgxjn246cbm0g7q1byktcglyr7jaby6vp488rtn7rxjkkhuf26 == \d\s\f\w\f\c\x\k\v\u\7\r\v\0\m\g\x\j\n\2\4\6\c\b\m\0\g\7\q\1\b\y\k\t\c\g\l\y\r\7\j\a\b\y\6\v\p\4\8\8\r\t\n\7\r\x\j\k\k\h\u\f\2\6 ]] 00:38:32.169 00:38:32.169 real 0m2.133s 00:38:32.169 user 0m1.699s 00:38:32.169 sys 0m0.235s 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:32.169 ************************************ 00:38:32.169 START TEST dd_flag_directory_forced_aio 00:38:32.169 ************************************ 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1121 -- # directory 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:32.169 11:31:50 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:32.169 11:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:32.169 [2024-05-15 11:31:50.737165] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:38:32.169 [2024-05-15 11:31:50.737343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79226 ] 00:38:32.427 [2024-05-15 11:31:50.889700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.685 [2024-05-15 11:31:51.107283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.943 [2024-05-15 11:31:51.457164] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:32.943 [2024-05-15 11:31:51.457256] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:32.943 [2024-05-15 11:31:51.457291] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:33.877 [2024-05-15 11:31:52.297572] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:34.136 11:31:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:34.394 [2024-05-15 11:31:52.824139] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:38:34.394 [2024-05-15 11:31:52.824320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79258 ] 00:38:34.394 [2024-05-15 11:31:52.976565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.652 [2024-05-15 11:31:53.195093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.218 [2024-05-15 11:31:53.548086] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:35.218 [2024-05-15 11:31:53.548189] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:35.218 [2024-05-15 11:31:53.548227] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:35.801 [2024-05-15 11:31:54.387451] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:38:36.367 ************************************ 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:36.367 00:38:36.367 real 0m4.169s 00:38:36.367 user 0m3.325s 00:38:36.367 sys 0m0.445s 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:36.367 END 
TEST dd_flag_directory_forced_aio 00:38:36.367 ************************************ 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:36.367 ************************************ 00:38:36.367 START TEST dd_flag_nofollow_forced_aio 00:38:36.367 ************************************ 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1121 -- # nofollow 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:36.367 11:31:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:36.367 [2024-05-15 11:31:54.952460] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:38:36.367 [2024-05-15 11:31:54.952642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79304 ] 00:38:36.625 [2024-05-15 11:31:55.113126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.883 [2024-05-15 11:31:55.329431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.142 [2024-05-15 11:31:55.679746] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:37.142 [2024-05-15 11:31:55.680092] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:37.142 [2024-05-15 11:31:55.680139] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:38.077 [2024-05-15 11:31:56.513164] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:38.335 11:31:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:38.593 [2024-05-15 11:31:57.048746] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:38:38.593 [2024-05-15 11:31:57.048964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79337 ] 00:38:38.593 [2024-05-15 11:31:57.202079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.851 [2024-05-15 11:31:57.417330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.418 [2024-05-15 11:31:57.765372] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:39.418 [2024-05-15 11:31:57.765456] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:39.418 [2024-05-15 11:31:57.765494] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:39.985 [2024-05-15 11:31:58.600103] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:40.552 11:31:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:40.552 [2024-05-15 11:31:59.126327] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:40.552 [2024-05-15 11:31:59.126510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79366 ] 00:38:40.811 [2024-05-15 11:31:59.290508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.070 [2024-05-15 11:31:59.506321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.702  Copying: 512/512 [B] (average 500 kBps) 00:38:42.702 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ f4fk6waoxbpvjjnweshhf8gde4gd4zvs8g5rt3yrrpi6o0mdfos3rp2zlroiqfbrds8rwgxxlvk8d3p3ul8356vgtiuj5zo7pj8fz0xpnw6adcwyalzbg45bc38gokjam9ra7hcevrb16adqquoaqjkzy7gs5ll3z16fgq4xkc4a9lcfdxumtznwvoca1cqhqlrid6ldy8wwmuczvtufut3e1t8uhq2rdq2odvk6xa6twokenpi2gu1noyl8vezm3m8afkvc1wd1adt4ztihnqc04uia8yw9kqx12gaoz3vvkbm1iuqmxuebwg4jrjmexqkr5mn73sf94fl2aboht92eswprjd6nlkur4hld1xxey62lfv3r5n4r8k95gpaurbnln9yuwza9xd6zthrdne4ih9d0dt8zgtuj2vdlt8qby7rovhhgdutgnu7wyz019snlglf0490j4jmcw3ts6oo91p4qfgdgnpg64yqq25v2o0tab7oz60r2iap0n78m == \f\4\f\k\6\w\a\o\x\b\p\v\j\j\n\w\e\s\h\h\f\8\g\d\e\4\g\d\4\z\v\s\8\g\5\r\t\3\y\r\r\p\i\6\o\0\m\d\f\o\s\3\r\p\2\z\l\r\o\i\q\f\b\r\d\s\8\r\w\g\x\x\l\v\k\8\d\3\p\3\u\l\8\3\5\6\v\g\t\i\u\j\5\z\o\7\p\j\8\f\z\0\x\p\n\w\6\a\d\c\w\y\a\l\z\b\g\4\5\b\c\3\8\g\o\k\j\a\m\9\r\a\7\h\c\e\v\r\b\1\6\a\d\q\q\u\o\a\q\j\k\z\y\7\g\s\5\l\l\3\z\1\6\f\g\q\4\x\k\c\4\a\9\l\c\f\d\x\u\m\t\z\n\w\v\o\c\a\1\c\q\h\q\l\r\i\d\6\l\d\y\8\w\w\m\u\c\z\v\t\u\f\u\t\3\e\1\t\8\u\h\q\2\r\d\q\2\o\d\v\k\6\x\a\6\t\w\o\k\e\n\p\i\2\g\u\1\n\o\y\l\8\v\e\z\m\3\m\8\a\f\k\v\c\1\w\d\1\a\d\t\4\z\t\i\h\n\q\c\0\4\u\i\a\8\y\w\9\k\q\x\1\2\g\a\o\z\3\v\v\k\b\m\1\i\u\q\m\x\u\e\b\w\g\4\j\r\j\m\e\x\q\k\r\5\m\n\7\3\s\f\9\4\f\l\2\a\b\o\h\t\9\2\e\s\w\p\r\j\d\6\n\l\k\u\r\4\h\l\d\1\x\x\e\y\6\2\l\f\v\3\r\5\n\4\r\8\k\9\5\g\p\a\u\r\b\n\l\n\9\y\u\w\z\a\9\x\d\6\z\t\h\r\d\n\e\4\i\h\9\d\0\d\t\8\z\g\t\u\j\2\v\d\l\t\8\q\b\y\7\r\o\v\h\h\g\d\u\t\g\n\u\7\w\y\z\0\1\9\s\n\l\g\l\f\0\4\9\0\j\4\j\m\c\w\3\t\s\6\o\o\9\1\p\4\q\f\g\d\g\n\p\g\6\4\y\q\q\2\5\v\2\o\0\t\a\b\7\o\z\6\0\r\2\i\a\p\0\n\7\8\m ]] 00:38:42.702 ************************************ 00:38:42.702 END TEST dd_flag_nofollow_forced_aio 00:38:42.702 ************************************ 00:38:42.702 00:38:42.702 real 0m6.276s 00:38:42.702 user 0m4.990s 00:38:42.702 sys 0m0.689s 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:42.702 ************************************ 00:38:42.702 START TEST dd_flag_noatime_forced_aio 00:38:42.702 ************************************ 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1121 -- # noatime 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 
-- # local atime_of 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1715772719 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1715772721 00:38:42.702 11:32:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:38:43.665 11:32:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:43.665 [2024-05-15 11:32:02.290954] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:38:43.665 [2024-05-15 11:32:02.291168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79439 ] 00:38:43.922 [2024-05-15 11:32:02.452889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.181 [2024-05-15 11:32:02.684939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.807  Copying: 512/512 [B] (average 500 kBps) 00:38:45.807 00:38:45.807 11:32:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:45.807 11:32:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1715772719 )) 00:38:45.807 11:32:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:45.807 11:32:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1715772721 )) 00:38:45.807 11:32:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:45.807 [2024-05-15 11:32:04.402564] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:45.807 [2024-05-15 11:32:04.402742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79465 ] 00:38:46.065 [2024-05-15 11:32:04.565884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.352 [2024-05-15 11:32:04.842202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.986  Copying: 512/512 [B] (average 500 kBps) 00:38:47.986 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1715772725 )) 00:38:47.986 00:38:47.986 real 0m5.286s 00:38:47.986 user 0m3.396s 00:38:47.986 sys 0m0.482s 00:38:47.986 ************************************ 00:38:47.986 END TEST dd_flag_noatime_forced_aio 00:38:47.986 ************************************ 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:47.986 ************************************ 00:38:47.986 START TEST dd_flags_misc_forced_aio 00:38:47.986 ************************************ 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1121 -- # io 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:47.986 11:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:47.986 [2024-05-15 11:32:06.607908] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:47.986 [2024-05-15 11:32:06.608127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79513 ] 00:38:48.245 [2024-05-15 11:32:06.757995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.503 [2024-05-15 11:32:06.967164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.136  Copying: 512/512 [B] (average 500 kBps) 00:38:50.136 00:38:50.136 11:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ b12owleteus59cumuibx5x6rw050potk27px510w7z7gg3fkdxfwzlmgbh9w78sqlk1m3zgpczujd5getqjqhwdewk635fab06xwgtj3ohrr4nk5z65ocllj1nrkea5w8xgehp27yaxp55pd7ptne2kvehru20kicxn2m31zlpusb24p2z5wdav9ztrza0rj7w7243240eboc2wtz3tt9r24p1uphpkzj2vo8zhsechfsld0hqk8we4pcp90j5o2uhcbsnejxvzw2owi45yzjl8wyj0pstj5qujuw3ig8man50rvis2o78pbmb87z9u70vun3a4wqoqy387xm6xwbo6p1r1jrbo45637y8lic72syqd5tkuh8e9h499j3r46uejs5t8vovfbdjr5fqcpxh93w7y2abnmwxi82ktfwwp1mnnuert8t4ed0d383xrajwzvsr0c3ok50i0m07uou030nrgwq6wne9ispb46w9f7ac6r82ob7v1pjnnj59ct == \b\1\2\o\w\l\e\t\e\u\s\5\9\c\u\m\u\i\b\x\5\x\6\r\w\0\5\0\p\o\t\k\2\7\p\x\5\1\0\w\7\z\7\g\g\3\f\k\d\x\f\w\z\l\m\g\b\h\9\w\7\8\s\q\l\k\1\m\3\z\g\p\c\z\u\j\d\5\g\e\t\q\j\q\h\w\d\e\w\k\6\3\5\f\a\b\0\6\x\w\g\t\j\3\o\h\r\r\4\n\k\5\z\6\5\o\c\l\l\j\1\n\r\k\e\a\5\w\8\x\g\e\h\p\2\7\y\a\x\p\5\5\p\d\7\p\t\n\e\2\k\v\e\h\r\u\2\0\k\i\c\x\n\2\m\3\1\z\l\p\u\s\b\2\4\p\2\z\5\w\d\a\v\9\z\t\r\z\a\0\r\j\7\w\7\2\4\3\2\4\0\e\b\o\c\2\w\t\z\3\t\t\9\r\2\4\p\1\u\p\h\p\k\z\j\2\v\o\8\z\h\s\e\c\h\f\s\l\d\0\h\q\k\8\w\e\4\p\c\p\9\0\j\5\o\2\u\h\c\b\s\n\e\j\x\v\z\w\2\o\w\i\4\5\y\z\j\l\8\w\y\j\0\p\s\t\j\5\q\u\j\u\w\3\i\g\8\m\a\n\5\0\r\v\i\s\2\o\7\8\p\b\m\b\8\7\z\9\u\7\0\v\u\n\3\a\4\w\q\o\q\y\3\8\7\x\m\6\x\w\b\o\6\p\1\r\1\j\r\b\o\4\5\6\3\7\y\8\l\i\c\7\2\s\y\q\d\5\t\k\u\h\8\e\9\h\4\9\9\j\3\r\4\6\u\e\j\s\5\t\8\v\o\v\f\b\d\j\r\5\f\q\c\p\x\h\9\3\w\7\y\2\a\b\n\m\w\x\i\8\2\k\t\f\w\w\p\1\m\n\n\u\e\r\t\8\t\4\e\d\0\d\3\8\3\x\r\a\j\w\z\v\s\r\0\c\3\o\k\5\0\i\0\m\0\7\u\o\u\0\3\0\n\r\g\w\q\6\w\n\e\9\i\s\p\b\4\6\w\9\f\7\a\c\6\r\8\2\o\b\7\v\1\p\j\n\n\j\5\9\c\t ]] 00:38:50.136 11:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:50.136 11:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:50.136 [2024-05-15 11:32:08.630630] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:50.136 [2024-05-15 11:32:08.630971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79545 ] 00:38:50.394 [2024-05-15 11:32:08.789210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.652 [2024-05-15 11:32:09.049722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.284  Copying: 512/512 [B] (average 500 kBps) 00:38:52.284 00:38:52.284 11:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ b12owleteus59cumuibx5x6rw050potk27px510w7z7gg3fkdxfwzlmgbh9w78sqlk1m3zgpczujd5getqjqhwdewk635fab06xwgtj3ohrr4nk5z65ocllj1nrkea5w8xgehp27yaxp55pd7ptne2kvehru20kicxn2m31zlpusb24p2z5wdav9ztrza0rj7w7243240eboc2wtz3tt9r24p1uphpkzj2vo8zhsechfsld0hqk8we4pcp90j5o2uhcbsnejxvzw2owi45yzjl8wyj0pstj5qujuw3ig8man50rvis2o78pbmb87z9u70vun3a4wqoqy387xm6xwbo6p1r1jrbo45637y8lic72syqd5tkuh8e9h499j3r46uejs5t8vovfbdjr5fqcpxh93w7y2abnmwxi82ktfwwp1mnnuert8t4ed0d383xrajwzvsr0c3ok50i0m07uou030nrgwq6wne9ispb46w9f7ac6r82ob7v1pjnnj59ct == \b\1\2\o\w\l\e\t\e\u\s\5\9\c\u\m\u\i\b\x\5\x\6\r\w\0\5\0\p\o\t\k\2\7\p\x\5\1\0\w\7\z\7\g\g\3\f\k\d\x\f\w\z\l\m\g\b\h\9\w\7\8\s\q\l\k\1\m\3\z\g\p\c\z\u\j\d\5\g\e\t\q\j\q\h\w\d\e\w\k\6\3\5\f\a\b\0\6\x\w\g\t\j\3\o\h\r\r\4\n\k\5\z\6\5\o\c\l\l\j\1\n\r\k\e\a\5\w\8\x\g\e\h\p\2\7\y\a\x\p\5\5\p\d\7\p\t\n\e\2\k\v\e\h\r\u\2\0\k\i\c\x\n\2\m\3\1\z\l\p\u\s\b\2\4\p\2\z\5\w\d\a\v\9\z\t\r\z\a\0\r\j\7\w\7\2\4\3\2\4\0\e\b\o\c\2\w\t\z\3\t\t\9\r\2\4\p\1\u\p\h\p\k\z\j\2\v\o\8\z\h\s\e\c\h\f\s\l\d\0\h\q\k\8\w\e\4\p\c\p\9\0\j\5\o\2\u\h\c\b\s\n\e\j\x\v\z\w\2\o\w\i\4\5\y\z\j\l\8\w\y\j\0\p\s\t\j\5\q\u\j\u\w\3\i\g\8\m\a\n\5\0\r\v\i\s\2\o\7\8\p\b\m\b\8\7\z\9\u\7\0\v\u\n\3\a\4\w\q\o\q\y\3\8\7\x\m\6\x\w\b\o\6\p\1\r\1\j\r\b\o\4\5\6\3\7\y\8\l\i\c\7\2\s\y\q\d\5\t\k\u\h\8\e\9\h\4\9\9\j\3\r\4\6\u\e\j\s\5\t\8\v\o\v\f\b\d\j\r\5\f\q\c\p\x\h\9\3\w\7\y\2\a\b\n\m\w\x\i\8\2\k\t\f\w\w\p\1\m\n\n\u\e\r\t\8\t\4\e\d\0\d\3\8\3\x\r\a\j\w\z\v\s\r\0\c\3\o\k\5\0\i\0\m\0\7\u\o\u\0\3\0\n\r\g\w\q\6\w\n\e\9\i\s\p\b\4\6\w\9\f\7\a\c\6\r\8\2\o\b\7\v\1\p\j\n\n\j\5\9\c\t ]] 00:38:52.284 11:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:52.284 11:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:52.284 [2024-05-15 11:32:10.778081] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:52.284 [2024-05-15 11:32:10.778292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79570 ] 00:38:52.542 [2024-05-15 11:32:10.930117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.542 [2024-05-15 11:32:11.140309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.057  Copying: 512/512 [B] (average 166 kBps) 00:38:54.057 00:38:54.316 11:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ b12owleteus59cumuibx5x6rw050potk27px510w7z7gg3fkdxfwzlmgbh9w78sqlk1m3zgpczujd5getqjqhwdewk635fab06xwgtj3ohrr4nk5z65ocllj1nrkea5w8xgehp27yaxp55pd7ptne2kvehru20kicxn2m31zlpusb24p2z5wdav9ztrza0rj7w7243240eboc2wtz3tt9r24p1uphpkzj2vo8zhsechfsld0hqk8we4pcp90j5o2uhcbsnejxvzw2owi45yzjl8wyj0pstj5qujuw3ig8man50rvis2o78pbmb87z9u70vun3a4wqoqy387xm6xwbo6p1r1jrbo45637y8lic72syqd5tkuh8e9h499j3r46uejs5t8vovfbdjr5fqcpxh93w7y2abnmwxi82ktfwwp1mnnuert8t4ed0d383xrajwzvsr0c3ok50i0m07uou030nrgwq6wne9ispb46w9f7ac6r82ob7v1pjnnj59ct == \b\1\2\o\w\l\e\t\e\u\s\5\9\c\u\m\u\i\b\x\5\x\6\r\w\0\5\0\p\o\t\k\2\7\p\x\5\1\0\w\7\z\7\g\g\3\f\k\d\x\f\w\z\l\m\g\b\h\9\w\7\8\s\q\l\k\1\m\3\z\g\p\c\z\u\j\d\5\g\e\t\q\j\q\h\w\d\e\w\k\6\3\5\f\a\b\0\6\x\w\g\t\j\3\o\h\r\r\4\n\k\5\z\6\5\o\c\l\l\j\1\n\r\k\e\a\5\w\8\x\g\e\h\p\2\7\y\a\x\p\5\5\p\d\7\p\t\n\e\2\k\v\e\h\r\u\2\0\k\i\c\x\n\2\m\3\1\z\l\p\u\s\b\2\4\p\2\z\5\w\d\a\v\9\z\t\r\z\a\0\r\j\7\w\7\2\4\3\2\4\0\e\b\o\c\2\w\t\z\3\t\t\9\r\2\4\p\1\u\p\h\p\k\z\j\2\v\o\8\z\h\s\e\c\h\f\s\l\d\0\h\q\k\8\w\e\4\p\c\p\9\0\j\5\o\2\u\h\c\b\s\n\e\j\x\v\z\w\2\o\w\i\4\5\y\z\j\l\8\w\y\j\0\p\s\t\j\5\q\u\j\u\w\3\i\g\8\m\a\n\5\0\r\v\i\s\2\o\7\8\p\b\m\b\8\7\z\9\u\7\0\v\u\n\3\a\4\w\q\o\q\y\3\8\7\x\m\6\x\w\b\o\6\p\1\r\1\j\r\b\o\4\5\6\3\7\y\8\l\i\c\7\2\s\y\q\d\5\t\k\u\h\8\e\9\h\4\9\9\j\3\r\4\6\u\e\j\s\5\t\8\v\o\v\f\b\d\j\r\5\f\q\c\p\x\h\9\3\w\7\y\2\a\b\n\m\w\x\i\8\2\k\t\f\w\w\p\1\m\n\n\u\e\r\t\8\t\4\e\d\0\d\3\8\3\x\r\a\j\w\z\v\s\r\0\c\3\o\k\5\0\i\0\m\0\7\u\o\u\0\3\0\n\r\g\w\q\6\w\n\e\9\i\s\p\b\4\6\w\9\f\7\a\c\6\r\8\2\o\b\7\v\1\p\j\n\n\j\5\9\c\t ]] 00:38:54.316 11:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:54.316 11:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:54.316 [2024-05-15 11:32:12.843789] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:54.316 [2024-05-15 11:32:12.844013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79602 ] 00:38:54.575 [2024-05-15 11:32:12.993369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.575 [2024-05-15 11:32:13.205734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.530  Copying: 512/512 [B] (average 250 kBps) 00:38:56.530 00:38:56.531 11:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ b12owleteus59cumuibx5x6rw050potk27px510w7z7gg3fkdxfwzlmgbh9w78sqlk1m3zgpczujd5getqjqhwdewk635fab06xwgtj3ohrr4nk5z65ocllj1nrkea5w8xgehp27yaxp55pd7ptne2kvehru20kicxn2m31zlpusb24p2z5wdav9ztrza0rj7w7243240eboc2wtz3tt9r24p1uphpkzj2vo8zhsechfsld0hqk8we4pcp90j5o2uhcbsnejxvzw2owi45yzjl8wyj0pstj5qujuw3ig8man50rvis2o78pbmb87z9u70vun3a4wqoqy387xm6xwbo6p1r1jrbo45637y8lic72syqd5tkuh8e9h499j3r46uejs5t8vovfbdjr5fqcpxh93w7y2abnmwxi82ktfwwp1mnnuert8t4ed0d383xrajwzvsr0c3ok50i0m07uou030nrgwq6wne9ispb46w9f7ac6r82ob7v1pjnnj59ct == \b\1\2\o\w\l\e\t\e\u\s\5\9\c\u\m\u\i\b\x\5\x\6\r\w\0\5\0\p\o\t\k\2\7\p\x\5\1\0\w\7\z\7\g\g\3\f\k\d\x\f\w\z\l\m\g\b\h\9\w\7\8\s\q\l\k\1\m\3\z\g\p\c\z\u\j\d\5\g\e\t\q\j\q\h\w\d\e\w\k\6\3\5\f\a\b\0\6\x\w\g\t\j\3\o\h\r\r\4\n\k\5\z\6\5\o\c\l\l\j\1\n\r\k\e\a\5\w\8\x\g\e\h\p\2\7\y\a\x\p\5\5\p\d\7\p\t\n\e\2\k\v\e\h\r\u\2\0\k\i\c\x\n\2\m\3\1\z\l\p\u\s\b\2\4\p\2\z\5\w\d\a\v\9\z\t\r\z\a\0\r\j\7\w\7\2\4\3\2\4\0\e\b\o\c\2\w\t\z\3\t\t\9\r\2\4\p\1\u\p\h\p\k\z\j\2\v\o\8\z\h\s\e\c\h\f\s\l\d\0\h\q\k\8\w\e\4\p\c\p\9\0\j\5\o\2\u\h\c\b\s\n\e\j\x\v\z\w\2\o\w\i\4\5\y\z\j\l\8\w\y\j\0\p\s\t\j\5\q\u\j\u\w\3\i\g\8\m\a\n\5\0\r\v\i\s\2\o\7\8\p\b\m\b\8\7\z\9\u\7\0\v\u\n\3\a\4\w\q\o\q\y\3\8\7\x\m\6\x\w\b\o\6\p\1\r\1\j\r\b\o\4\5\6\3\7\y\8\l\i\c\7\2\s\y\q\d\5\t\k\u\h\8\e\9\h\4\9\9\j\3\r\4\6\u\e\j\s\5\t\8\v\o\v\f\b\d\j\r\5\f\q\c\p\x\h\9\3\w\7\y\2\a\b\n\m\w\x\i\8\2\k\t\f\w\w\p\1\m\n\n\u\e\r\t\8\t\4\e\d\0\d\3\8\3\x\r\a\j\w\z\v\s\r\0\c\3\o\k\5\0\i\0\m\0\7\u\o\u\0\3\0\n\r\g\w\q\6\w\n\e\9\i\s\p\b\4\6\w\9\f\7\a\c\6\r\8\2\o\b\7\v\1\p\j\n\n\j\5\9\c\t ]] 00:38:56.531 11:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:56.531 11:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:38:56.531 11:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:56.531 11:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:56.531 11:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:56.531 11:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:56.531 [2024-05-15 11:32:14.891417] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:56.531 [2024-05-15 11:32:14.891641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79627 ] 00:38:56.531 [2024-05-15 11:32:15.040366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.792 [2024-05-15 11:32:15.260460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.461  Copying: 512/512 [B] (average 500 kBps) 00:38:58.461 00:38:58.461 11:32:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2x50u2so3pqa4rjd2p688xy1ssecx98rmn9cuwilin0ljucc0n75l7h3m817wjqiykjb542y811y6gvfsmb8h56wlpdo70b8rspr0yd77riqyxofkri3i6tx9pjjlca7nxpi540szcgn356z2bvnu8xzmmpr0gi2c8zgyhw3dce3p2lw4srmq5mrljeqmjf7y5s72ag49v3fs0y9sno9vwnwtegc4um9ud1x01xibj3hai0b37ag10ctx9rp03n6fkl4be44h1cv5eic2py0e5uwcbilph3lpkobhlzvgc4io713iqav89hq9p2c8ojiog16hjzzaml4fr7fpzwys3a9bzttovu25yf2rn0ru9objirz3j4e18ytya086eto9l5kh9oc3xx7p53l0aor63g9qa37ok30yi8e18ibx4lbjn72uip43caz7ilaytbwy8jpslw4ud9bbjz30042jn4s49hlqak33ew2qib8hr9lcsem5enfwul27hw3epk2 == \2\x\5\0\u\2\s\o\3\p\q\a\4\r\j\d\2\p\6\8\8\x\y\1\s\s\e\c\x\9\8\r\m\n\9\c\u\w\i\l\i\n\0\l\j\u\c\c\0\n\7\5\l\7\h\3\m\8\1\7\w\j\q\i\y\k\j\b\5\4\2\y\8\1\1\y\6\g\v\f\s\m\b\8\h\5\6\w\l\p\d\o\7\0\b\8\r\s\p\r\0\y\d\7\7\r\i\q\y\x\o\f\k\r\i\3\i\6\t\x\9\p\j\j\l\c\a\7\n\x\p\i\5\4\0\s\z\c\g\n\3\5\6\z\2\b\v\n\u\8\x\z\m\m\p\r\0\g\i\2\c\8\z\g\y\h\w\3\d\c\e\3\p\2\l\w\4\s\r\m\q\5\m\r\l\j\e\q\m\j\f\7\y\5\s\7\2\a\g\4\9\v\3\f\s\0\y\9\s\n\o\9\v\w\n\w\t\e\g\c\4\u\m\9\u\d\1\x\0\1\x\i\b\j\3\h\a\i\0\b\3\7\a\g\1\0\c\t\x\9\r\p\0\3\n\6\f\k\l\4\b\e\4\4\h\1\c\v\5\e\i\c\2\p\y\0\e\5\u\w\c\b\i\l\p\h\3\l\p\k\o\b\h\l\z\v\g\c\4\i\o\7\1\3\i\q\a\v\8\9\h\q\9\p\2\c\8\o\j\i\o\g\1\6\h\j\z\z\a\m\l\4\f\r\7\f\p\z\w\y\s\3\a\9\b\z\t\t\o\v\u\2\5\y\f\2\r\n\0\r\u\9\o\b\j\i\r\z\3\j\4\e\1\8\y\t\y\a\0\8\6\e\t\o\9\l\5\k\h\9\o\c\3\x\x\7\p\5\3\l\0\a\o\r\6\3\g\9\q\a\3\7\o\k\3\0\y\i\8\e\1\8\i\b\x\4\l\b\j\n\7\2\u\i\p\4\3\c\a\z\7\i\l\a\y\t\b\w\y\8\j\p\s\l\w\4\u\d\9\b\b\j\z\3\0\0\4\2\j\n\4\s\4\9\h\l\q\a\k\3\3\e\w\2\q\i\b\8\h\r\9\l\c\s\e\m\5\e\n\f\w\u\l\2\7\h\w\3\e\p\k\2 ]] 00:38:58.461 11:32:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:58.461 11:32:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:58.461 [2024-05-15 11:32:17.021368] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:38:58.461 [2024-05-15 11:32:17.021568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79656 ] 00:38:58.719 [2024-05-15 11:32:17.191867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.978 [2024-05-15 11:32:17.421409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.614  Copying: 512/512 [B] (average 500 kBps) 00:39:00.614 00:39:00.614 11:32:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2x50u2so3pqa4rjd2p688xy1ssecx98rmn9cuwilin0ljucc0n75l7h3m817wjqiykjb542y811y6gvfsmb8h56wlpdo70b8rspr0yd77riqyxofkri3i6tx9pjjlca7nxpi540szcgn356z2bvnu8xzmmpr0gi2c8zgyhw3dce3p2lw4srmq5mrljeqmjf7y5s72ag49v3fs0y9sno9vwnwtegc4um9ud1x01xibj3hai0b37ag10ctx9rp03n6fkl4be44h1cv5eic2py0e5uwcbilph3lpkobhlzvgc4io713iqav89hq9p2c8ojiog16hjzzaml4fr7fpzwys3a9bzttovu25yf2rn0ru9objirz3j4e18ytya086eto9l5kh9oc3xx7p53l0aor63g9qa37ok30yi8e18ibx4lbjn72uip43caz7ilaytbwy8jpslw4ud9bbjz30042jn4s49hlqak33ew2qib8hr9lcsem5enfwul27hw3epk2 == \2\x\5\0\u\2\s\o\3\p\q\a\4\r\j\d\2\p\6\8\8\x\y\1\s\s\e\c\x\9\8\r\m\n\9\c\u\w\i\l\i\n\0\l\j\u\c\c\0\n\7\5\l\7\h\3\m\8\1\7\w\j\q\i\y\k\j\b\5\4\2\y\8\1\1\y\6\g\v\f\s\m\b\8\h\5\6\w\l\p\d\o\7\0\b\8\r\s\p\r\0\y\d\7\7\r\i\q\y\x\o\f\k\r\i\3\i\6\t\x\9\p\j\j\l\c\a\7\n\x\p\i\5\4\0\s\z\c\g\n\3\5\6\z\2\b\v\n\u\8\x\z\m\m\p\r\0\g\i\2\c\8\z\g\y\h\w\3\d\c\e\3\p\2\l\w\4\s\r\m\q\5\m\r\l\j\e\q\m\j\f\7\y\5\s\7\2\a\g\4\9\v\3\f\s\0\y\9\s\n\o\9\v\w\n\w\t\e\g\c\4\u\m\9\u\d\1\x\0\1\x\i\b\j\3\h\a\i\0\b\3\7\a\g\1\0\c\t\x\9\r\p\0\3\n\6\f\k\l\4\b\e\4\4\h\1\c\v\5\e\i\c\2\p\y\0\e\5\u\w\c\b\i\l\p\h\3\l\p\k\o\b\h\l\z\v\g\c\4\i\o\7\1\3\i\q\a\v\8\9\h\q\9\p\2\c\8\o\j\i\o\g\1\6\h\j\z\z\a\m\l\4\f\r\7\f\p\z\w\y\s\3\a\9\b\z\t\t\o\v\u\2\5\y\f\2\r\n\0\r\u\9\o\b\j\i\r\z\3\j\4\e\1\8\y\t\y\a\0\8\6\e\t\o\9\l\5\k\h\9\o\c\3\x\x\7\p\5\3\l\0\a\o\r\6\3\g\9\q\a\3\7\o\k\3\0\y\i\8\e\1\8\i\b\x\4\l\b\j\n\7\2\u\i\p\4\3\c\a\z\7\i\l\a\y\t\b\w\y\8\j\p\s\l\w\4\u\d\9\b\b\j\z\3\0\0\4\2\j\n\4\s\4\9\h\l\q\a\k\3\3\e\w\2\q\i\b\8\h\r\9\l\c\s\e\m\5\e\n\f\w\u\l\2\7\h\w\3\e\p\k\2 ]] 00:39:00.614 11:32:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:00.614 11:32:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:39:00.614 [2024-05-15 11:32:19.178093] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:00.614 [2024-05-15 11:32:19.178267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79680 ] 00:39:00.871 [2024-05-15 11:32:19.341078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.128 [2024-05-15 11:32:19.560226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.808  Copying: 512/512 [B] (average 125 kBps) 00:39:02.808 00:39:02.808 11:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2x50u2so3pqa4rjd2p688xy1ssecx98rmn9cuwilin0ljucc0n75l7h3m817wjqiykjb542y811y6gvfsmb8h56wlpdo70b8rspr0yd77riqyxofkri3i6tx9pjjlca7nxpi540szcgn356z2bvnu8xzmmpr0gi2c8zgyhw3dce3p2lw4srmq5mrljeqmjf7y5s72ag49v3fs0y9sno9vwnwtegc4um9ud1x01xibj3hai0b37ag10ctx9rp03n6fkl4be44h1cv5eic2py0e5uwcbilph3lpkobhlzvgc4io713iqav89hq9p2c8ojiog16hjzzaml4fr7fpzwys3a9bzttovu25yf2rn0ru9objirz3j4e18ytya086eto9l5kh9oc3xx7p53l0aor63g9qa37ok30yi8e18ibx4lbjn72uip43caz7ilaytbwy8jpslw4ud9bbjz30042jn4s49hlqak33ew2qib8hr9lcsem5enfwul27hw3epk2 == \2\x\5\0\u\2\s\o\3\p\q\a\4\r\j\d\2\p\6\8\8\x\y\1\s\s\e\c\x\9\8\r\m\n\9\c\u\w\i\l\i\n\0\l\j\u\c\c\0\n\7\5\l\7\h\3\m\8\1\7\w\j\q\i\y\k\j\b\5\4\2\y\8\1\1\y\6\g\v\f\s\m\b\8\h\5\6\w\l\p\d\o\7\0\b\8\r\s\p\r\0\y\d\7\7\r\i\q\y\x\o\f\k\r\i\3\i\6\t\x\9\p\j\j\l\c\a\7\n\x\p\i\5\4\0\s\z\c\g\n\3\5\6\z\2\b\v\n\u\8\x\z\m\m\p\r\0\g\i\2\c\8\z\g\y\h\w\3\d\c\e\3\p\2\l\w\4\s\r\m\q\5\m\r\l\j\e\q\m\j\f\7\y\5\s\7\2\a\g\4\9\v\3\f\s\0\y\9\s\n\o\9\v\w\n\w\t\e\g\c\4\u\m\9\u\d\1\x\0\1\x\i\b\j\3\h\a\i\0\b\3\7\a\g\1\0\c\t\x\9\r\p\0\3\n\6\f\k\l\4\b\e\4\4\h\1\c\v\5\e\i\c\2\p\y\0\e\5\u\w\c\b\i\l\p\h\3\l\p\k\o\b\h\l\z\v\g\c\4\i\o\7\1\3\i\q\a\v\8\9\h\q\9\p\2\c\8\o\j\i\o\g\1\6\h\j\z\z\a\m\l\4\f\r\7\f\p\z\w\y\s\3\a\9\b\z\t\t\o\v\u\2\5\y\f\2\r\n\0\r\u\9\o\b\j\i\r\z\3\j\4\e\1\8\y\t\y\a\0\8\6\e\t\o\9\l\5\k\h\9\o\c\3\x\x\7\p\5\3\l\0\a\o\r\6\3\g\9\q\a\3\7\o\k\3\0\y\i\8\e\1\8\i\b\x\4\l\b\j\n\7\2\u\i\p\4\3\c\a\z\7\i\l\a\y\t\b\w\y\8\j\p\s\l\w\4\u\d\9\b\b\j\z\3\0\0\4\2\j\n\4\s\4\9\h\l\q\a\k\3\3\e\w\2\q\i\b\8\h\r\9\l\c\s\e\m\5\e\n\f\w\u\l\2\7\h\w\3\e\p\k\2 ]] 00:39:02.808 11:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:02.808 11:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:39:02.808 [2024-05-15 11:32:21.310371] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:02.808 [2024-05-15 11:32:21.310542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79711 ] 00:39:03.067 [2024-05-15 11:32:21.465792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.067 [2024-05-15 11:32:21.692250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.010  Copying: 512/512 [B] (average 166 kBps) 00:39:05.010 00:39:05.010 11:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2x50u2so3pqa4rjd2p688xy1ssecx98rmn9cuwilin0ljucc0n75l7h3m817wjqiykjb542y811y6gvfsmb8h56wlpdo70b8rspr0yd77riqyxofkri3i6tx9pjjlca7nxpi540szcgn356z2bvnu8xzmmpr0gi2c8zgyhw3dce3p2lw4srmq5mrljeqmjf7y5s72ag49v3fs0y9sno9vwnwtegc4um9ud1x01xibj3hai0b37ag10ctx9rp03n6fkl4be44h1cv5eic2py0e5uwcbilph3lpkobhlzvgc4io713iqav89hq9p2c8ojiog16hjzzaml4fr7fpzwys3a9bzttovu25yf2rn0ru9objirz3j4e18ytya086eto9l5kh9oc3xx7p53l0aor63g9qa37ok30yi8e18ibx4lbjn72uip43caz7ilaytbwy8jpslw4ud9bbjz30042jn4s49hlqak33ew2qib8hr9lcsem5enfwul27hw3epk2 == \2\x\5\0\u\2\s\o\3\p\q\a\4\r\j\d\2\p\6\8\8\x\y\1\s\s\e\c\x\9\8\r\m\n\9\c\u\w\i\l\i\n\0\l\j\u\c\c\0\n\7\5\l\7\h\3\m\8\1\7\w\j\q\i\y\k\j\b\5\4\2\y\8\1\1\y\6\g\v\f\s\m\b\8\h\5\6\w\l\p\d\o\7\0\b\8\r\s\p\r\0\y\d\7\7\r\i\q\y\x\o\f\k\r\i\3\i\6\t\x\9\p\j\j\l\c\a\7\n\x\p\i\5\4\0\s\z\c\g\n\3\5\6\z\2\b\v\n\u\8\x\z\m\m\p\r\0\g\i\2\c\8\z\g\y\h\w\3\d\c\e\3\p\2\l\w\4\s\r\m\q\5\m\r\l\j\e\q\m\j\f\7\y\5\s\7\2\a\g\4\9\v\3\f\s\0\y\9\s\n\o\9\v\w\n\w\t\e\g\c\4\u\m\9\u\d\1\x\0\1\x\i\b\j\3\h\a\i\0\b\3\7\a\g\1\0\c\t\x\9\r\p\0\3\n\6\f\k\l\4\b\e\4\4\h\1\c\v\5\e\i\c\2\p\y\0\e\5\u\w\c\b\i\l\p\h\3\l\p\k\o\b\h\l\z\v\g\c\4\i\o\7\1\3\i\q\a\v\8\9\h\q\9\p\2\c\8\o\j\i\o\g\1\6\h\j\z\z\a\m\l\4\f\r\7\f\p\z\w\y\s\3\a\9\b\z\t\t\o\v\u\2\5\y\f\2\r\n\0\r\u\9\o\b\j\i\r\z\3\j\4\e\1\8\y\t\y\a\0\8\6\e\t\o\9\l\5\k\h\9\o\c\3\x\x\7\p\5\3\l\0\a\o\r\6\3\g\9\q\a\3\7\o\k\3\0\y\i\8\e\1\8\i\b\x\4\l\b\j\n\7\2\u\i\p\4\3\c\a\z\7\i\l\a\y\t\b\w\y\8\j\p\s\l\w\4\u\d\9\b\b\j\z\3\0\0\4\2\j\n\4\s\4\9\h\l\q\a\k\3\3\e\w\2\q\i\b\8\h\r\9\l\c\s\e\m\5\e\n\f\w\u\l\2\7\h\w\3\e\p\k\2 ]] 00:39:05.010 00:39:05.010 real 0m16.817s 00:39:05.010 user 0m13.293s 00:39:05.010 sys 0m1.895s 00:39:05.010 11:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:05.010 ************************************ 00:39:05.010 END TEST dd_flags_misc_forced_aio 00:39:05.010 ************************************ 00:39:05.010 11:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:39:05.010 11:32:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:39:05.010 11:32:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:39:05.010 11:32:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:39:05.010 ************************************ 00:39:05.010 END TEST spdk_dd_posix 00:39:05.010 ************************************ 00:39:05.010 00:39:05.010 real 1m11.126s 00:39:05.010 user 0m54.602s 00:39:05.010 sys 0m8.023s 00:39:05.010 11:32:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:05.010 11:32:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:39:05.010 11:32:23 spdk_dd -- 
dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:39:05.010 11:32:23 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:05.010 11:32:23 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:05.010 11:32:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:05.010 ************************************ 00:39:05.010 START TEST spdk_dd_malloc 00:39:05.010 ************************************ 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:39:05.010 * Looking for test storage... 00:39:05.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 
00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:39:05.010 ************************************ 00:39:05.010 START TEST dd_malloc_copy 00:39:05.010 ************************************ 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1121 -- # malloc_copy 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:39:05.010 11:32:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:39:05.010 { 00:39:05.010 "subsystems": [ 00:39:05.010 { 00:39:05.010 "subsystem": "bdev", 00:39:05.010 "config": [ 00:39:05.010 { 00:39:05.010 "params": { 00:39:05.010 "block_size": 512, 00:39:05.010 "name": "malloc0", 00:39:05.010 "num_blocks": 1048576 00:39:05.010 }, 00:39:05.010 "method": "bdev_malloc_create" 00:39:05.010 }, 00:39:05.010 { 00:39:05.010 "params": { 00:39:05.010 "block_size": 512, 00:39:05.010 "name": "malloc1", 00:39:05.010 "num_blocks": 1048576 00:39:05.010 }, 00:39:05.010 "method": "bdev_malloc_create" 00:39:05.010 }, 00:39:05.010 { 00:39:05.010 "method": "bdev_wait_for_examine" 00:39:05.010 } 00:39:05.010 ] 00:39:05.010 } 00:39:05.010 ] 00:39:05.010 } 00:39:05.010 [2024-05-15 11:32:23.570025] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:05.010 [2024-05-15 11:32:23.570199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79828 ] 00:39:05.268 [2024-05-15 11:32:23.727572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.526 [2024-05-15 11:32:23.945982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.274  Copying: 464/512 [MB] (464 MBps) Copying: 512/512 [MB] (average 462 MBps) 00:39:12.274 00:39:12.274 11:32:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:39:12.274 11:32:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:39:12.274 11:32:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:39:12.274 11:32:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:39:12.274 { 00:39:12.274 "subsystems": [ 00:39:12.274 { 00:39:12.274 "subsystem": "bdev", 00:39:12.274 "config": [ 00:39:12.274 { 00:39:12.274 "params": { 00:39:12.274 "block_size": 512, 00:39:12.274 "name": "malloc0", 00:39:12.274 "num_blocks": 1048576 00:39:12.274 }, 00:39:12.274 "method": "bdev_malloc_create" 00:39:12.274 }, 00:39:12.274 { 00:39:12.274 "params": { 00:39:12.274 "block_size": 512, 00:39:12.274 "name": "malloc1", 00:39:12.274 "num_blocks": 1048576 00:39:12.274 }, 00:39:12.274 "method": "bdev_malloc_create" 00:39:12.274 }, 00:39:12.274 { 00:39:12.274 "method": "bdev_wait_for_examine" 00:39:12.274 } 00:39:12.274 ] 00:39:12.274 } 00:39:12.274 ] 00:39:12.274 } 00:39:12.274 [2024-05-15 11:32:30.399918] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:12.274 [2024-05-15 11:32:30.400139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79910 ] 00:39:12.274 [2024-05-15 11:32:30.563057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.274 [2024-05-15 11:32:30.789683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:19.015  Copying: 464/512 [MB] (464 MBps) Copying: 512/512 [MB] (average 464 MBps) 00:39:19.015 00:39:19.015 ************************************ 00:39:19.015 END TEST dd_malloc_copy 00:39:19.015 ************************************ 00:39:19.015 00:39:19.015 real 0m13.725s 00:39:19.015 user 0m12.205s 00:39:19.015 sys 0m1.244s 00:39:19.015 11:32:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:19.015 11:32:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:39:19.015 ************************************ 00:39:19.015 END TEST spdk_dd_malloc 00:39:19.015 ************************************ 00:39:19.015 00:39:19.015 real 0m13.844s 00:39:19.015 user 0m12.256s 00:39:19.015 sys 0m1.312s 00:39:19.015 11:32:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:19.015 11:32:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:39:19.015 11:32:37 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:39:19.015 11:32:37 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:19.015 11:32:37 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:19.015 11:32:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:19.015 ************************************ 00:39:19.015 START TEST spdk_dd_bdev_to_bdev 00:39:19.015 ************************************ 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:39:19.015 * Looking for test storage... 
00:39:19.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:39:19.015 11:32:37 
spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:39:19.015 11:32:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:39:19.015 [2024-05-15 11:32:37.483298] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:39:19.015 [2024-05-15 11:32:37.483466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80066 ] 00:39:19.015 [2024-05-15 11:32:37.636381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:19.273 [2024-05-15 11:32:37.868238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.215  Copying: 256/256 [MB] (average 1651 MBps) 00:39:21.215 00:39:21.215 11:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:21.215 11:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:21.215 11:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:39:21.215 11:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:39:21.215 11:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:39:21.215 11:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:39:21.215 11:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:21.215 11:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:21.215 ************************************ 00:39:21.215 START TEST dd_inflate_file 00:39:21.215 ************************************ 00:39:21.215 11:32:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:39:21.215 [2024-05-15 11:32:39.816353] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:21.215 [2024-05-15 11:32:39.816516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80101 ] 00:39:21.473 [2024-05-15 11:32:39.972772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:21.730 [2024-05-15 11:32:40.201630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.362  Copying: 64/64 [MB] (average 1600 MBps) 00:39:23.362 00:39:23.362 ************************************ 00:39:23.362 END TEST dd_inflate_file 00:39:23.362 ************************************ 00:39:23.362 00:39:23.362 real 0m2.190s 00:39:23.362 user 0m1.710s 00:39:23.362 sys 0m0.277s 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:23.362 ************************************ 00:39:23.362 START TEST dd_copy_to_out_bdev 00:39:23.362 ************************************ 00:39:23.362 11:32:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:39:23.362 { 00:39:23.362 "subsystems": [ 00:39:23.362 { 00:39:23.362 "subsystem": "bdev", 00:39:23.362 "config": [ 00:39:23.362 { 00:39:23.362 "params": { 00:39:23.362 "block_size": 4096, 00:39:23.362 "name": "aio1", 00:39:23.362 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:39:23.362 }, 00:39:23.362 "method": "bdev_aio_create" 00:39:23.362 }, 00:39:23.362 { 00:39:23.362 "params": { 00:39:23.362 "trtype": "pcie", 00:39:23.362 "name": "Nvme0", 00:39:23.362 "traddr": "0000:00:10.0" 00:39:23.362 }, 00:39:23.362 "method": "bdev_nvme_attach_controller" 00:39:23.362 }, 00:39:23.362 { 00:39:23.362 "method": "bdev_wait_for_examine" 00:39:23.362 } 00:39:23.362 ] 00:39:23.362 } 00:39:23.362 ] 00:39:23.362 } 00:39:23.620 [2024-05-15 11:32:42.062609] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:23.620 [2024-05-15 11:32:42.062781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80172 ] 00:39:23.620 [2024-05-15 11:32:42.234320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:23.877 [2024-05-15 11:32:42.454279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.671  Copying: 64/64 [MB] (average 85 MBps) 00:39:26.671 00:39:26.671 00:39:26.671 real 0m3.023s 00:39:26.671 user 0m2.584s 00:39:26.671 sys 0m0.300s 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:26.672 ************************************ 00:39:26.672 END TEST dd_copy_to_out_bdev 00:39:26.672 ************************************ 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:26.672 ************************************ 00:39:26.672 START TEST dd_offset_magic 00:39:26.672 ************************************ 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1121 -- # offset_magic 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:26.672 11:32:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:26.672 { 00:39:26.672 "subsystems": [ 00:39:26.672 { 00:39:26.672 "subsystem": "bdev", 00:39:26.672 "config": [ 00:39:26.672 { 00:39:26.672 "params": { 00:39:26.672 "block_size": 4096, 00:39:26.672 "name": "aio1", 00:39:26.672 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:39:26.672 }, 00:39:26.672 "method": "bdev_aio_create" 00:39:26.672 }, 00:39:26.672 { 00:39:26.672 "params": { 00:39:26.672 "trtype": "pcie", 00:39:26.672 "name": "Nvme0", 00:39:26.672 "traddr": "0000:00:10.0" 00:39:26.672 }, 00:39:26.672 "method": "bdev_nvme_attach_controller" 00:39:26.672 }, 00:39:26.672 { 00:39:26.672 "method": "bdev_wait_for_examine" 00:39:26.672 } 00:39:26.672 ] 00:39:26.672 } 00:39:26.672 ] 00:39:26.672 } 
00:39:26.672 [2024-05-15 11:32:45.154069] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:39:26.672 [2024-05-15 11:32:45.154271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80229 ] 00:39:26.929 [2024-05-15 11:32:45.317891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:27.187 [2024-05-15 11:32:45.592644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.126  Copying: 65/65 [MB] (average 232 MBps) 00:39:29.126 00:39:29.126 11:32:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:39:29.126 11:32:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:39:29.126 11:32:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:29.126 11:32:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:29.126 { 00:39:29.126 "subsystems": [ 00:39:29.126 { 00:39:29.126 "subsystem": "bdev", 00:39:29.126 "config": [ 00:39:29.126 { 00:39:29.126 "params": { 00:39:29.126 "block_size": 4096, 00:39:29.126 "name": "aio1", 00:39:29.126 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:39:29.126 }, 00:39:29.126 "method": "bdev_aio_create" 00:39:29.126 }, 00:39:29.126 { 00:39:29.126 "params": { 00:39:29.126 "trtype": "pcie", 00:39:29.126 "name": "Nvme0", 00:39:29.126 "traddr": "0000:00:10.0" 00:39:29.126 }, 00:39:29.126 "method": "bdev_nvme_attach_controller" 00:39:29.126 }, 00:39:29.126 { 00:39:29.126 "method": "bdev_wait_for_examine" 00:39:29.126 } 00:39:29.126 ] 00:39:29.126 } 00:39:29.126 ] 00:39:29.126 } 00:39:29.384 [2024-05-15 11:32:47.802377] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:29.384 [2024-05-15 11:32:47.802547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80267 ] 00:39:29.384 [2024-05-15 11:32:47.955736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.642 [2024-05-15 11:32:48.174587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:31.583  Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:31.583 00:39:31.583 11:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:39:31.583 11:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:39:31.583 11:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:39:31.583 11:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:39:31.583 11:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:39:31.583 11:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:31.583 11:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:31.583 { 00:39:31.583 "subsystems": [ 00:39:31.583 { 00:39:31.583 "subsystem": "bdev", 00:39:31.583 "config": [ 00:39:31.583 { 00:39:31.583 "params": { 00:39:31.583 "block_size": 4096, 00:39:31.583 "name": "aio1", 00:39:31.583 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:39:31.583 }, 00:39:31.583 "method": "bdev_aio_create" 00:39:31.583 }, 00:39:31.583 { 00:39:31.583 "params": { 00:39:31.583 "trtype": "pcie", 00:39:31.583 "name": "Nvme0", 00:39:31.583 "traddr": "0000:00:10.0" 00:39:31.583 }, 00:39:31.583 "method": "bdev_nvme_attach_controller" 00:39:31.583 }, 00:39:31.583 { 00:39:31.583 "method": "bdev_wait_for_examine" 00:39:31.583 } 00:39:31.583 ] 00:39:31.583 } 00:39:31.583 ] 00:39:31.583 } 00:39:31.583 [2024-05-15 11:32:50.032665] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:31.583 [2024-05-15 11:32:50.033004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80296 ] 00:39:31.583 [2024-05-15 11:32:50.189577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:31.843 [2024-05-15 11:32:50.480862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.163  Copying: 65/65 [MB] (average 221 MBps) 00:39:34.163 00:39:34.163 11:32:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:39:34.163 11:32:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:39:34.163 11:32:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:34.163 11:32:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:34.163 { 00:39:34.163 "subsystems": [ 00:39:34.163 { 00:39:34.163 "subsystem": "bdev", 00:39:34.163 "config": [ 00:39:34.163 { 00:39:34.163 "params": { 00:39:34.163 "block_size": 4096, 00:39:34.163 "name": "aio1", 00:39:34.163 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:39:34.163 }, 00:39:34.163 "method": "bdev_aio_create" 00:39:34.163 }, 00:39:34.164 { 00:39:34.164 "params": { 00:39:34.164 "trtype": "pcie", 00:39:34.164 "name": "Nvme0", 00:39:34.164 "traddr": "0000:00:10.0" 00:39:34.164 }, 00:39:34.164 "method": "bdev_nvme_attach_controller" 00:39:34.164 }, 00:39:34.164 { 00:39:34.164 "method": "bdev_wait_for_examine" 00:39:34.164 } 00:39:34.164 ] 00:39:34.164 } 00:39:34.164 ] 00:39:34.164 } 00:39:34.164 [2024-05-15 11:32:52.725589] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:34.164 [2024-05-15 11:32:52.725760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80341 ] 00:39:34.422 [2024-05-15 11:32:52.880846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.681 [2024-05-15 11:32:53.104797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.311  Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:36.311 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:39:36.311 00:39:36.311 real 0m9.848s 00:39:36.311 user 0m7.818s 00:39:36.311 sys 0m1.111s 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:36.311 ************************************ 00:39:36.311 END TEST dd_offset_magic 00:39:36.311 ************************************ 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:39:36.311 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:39:36.312 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:39:36.312 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:39:36.312 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:36.312 11:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:36.569 { 00:39:36.569 "subsystems": [ 00:39:36.569 { 00:39:36.569 "subsystem": "bdev", 00:39:36.569 "config": [ 00:39:36.569 { 00:39:36.569 "params": { 00:39:36.569 "block_size": 4096, 00:39:36.569 "name": "aio1", 00:39:36.569 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:39:36.569 }, 00:39:36.569 "method": "bdev_aio_create" 00:39:36.569 }, 00:39:36.569 { 00:39:36.569 "params": { 00:39:36.569 "trtype": "pcie", 00:39:36.569 "name": "Nvme0", 00:39:36.569 "traddr": "0000:00:10.0" 00:39:36.569 }, 00:39:36.569 "method": "bdev_nvme_attach_controller" 00:39:36.569 }, 00:39:36.569 { 00:39:36.569 "method": "bdev_wait_for_examine" 00:39:36.569 } 00:39:36.569 ] 00:39:36.569 } 00:39:36.569 ] 00:39:36.569 } 00:39:36.569 [2024-05-15 11:32:55.027927] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:36.569 [2024-05-15 11:32:55.028148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80390 ] 00:39:36.569 [2024-05-15 11:32:55.199125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.827 [2024-05-15 11:32:55.455198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.764  Copying: 5120/5120 [kB] (average 1250 MBps) 00:39:38.764 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:38.764 11:32:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:38.764 { 00:39:38.764 "subsystems": [ 00:39:38.764 { 00:39:38.764 "subsystem": "bdev", 00:39:38.764 "config": [ 00:39:38.764 { 00:39:38.764 "params": { 00:39:38.764 "block_size": 4096, 00:39:38.764 "name": "aio1", 00:39:38.764 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:39:38.764 }, 00:39:38.764 "method": "bdev_aio_create" 00:39:38.764 }, 00:39:38.764 { 00:39:38.764 "params": { 00:39:38.764 "trtype": "pcie", 00:39:38.764 "name": "Nvme0", 00:39:38.764 "traddr": "0000:00:10.0" 00:39:38.764 }, 00:39:38.764 "method": "bdev_nvme_attach_controller" 00:39:38.764 }, 00:39:38.764 { 00:39:38.764 "method": "bdev_wait_for_examine" 00:39:38.764 } 00:39:38.764 ] 00:39:38.764 } 00:39:38.764 ] 00:39:38.764 } 00:39:38.764 [2024-05-15 11:32:57.326784] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:38.764 [2024-05-15 11:32:57.327143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80431 ] 00:39:39.022 [2024-05-15 11:32:57.487019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.280 [2024-05-15 11:32:57.705175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.780  Copying: 5120/5120 [kB] (average 172 MBps) 00:39:40.780 00:39:40.780 11:32:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:39:41.038 00:39:41.038 real 0m22.217s 00:39:41.038 user 0m17.679s 00:39:41.038 sys 0m2.765s 00:39:41.038 11:32:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:41.038 11:32:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:41.038 ************************************ 00:39:41.038 END TEST spdk_dd_bdev_to_bdev 00:39:41.038 ************************************ 00:39:41.038 11:32:59 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:39:41.038 11:32:59 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:39:41.038 11:32:59 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:41.038 11:32:59 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:41.038 11:32:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:41.038 ************************************ 00:39:41.038 START TEST spdk_dd_sparse 00:39:41.038 ************************************ 00:39:41.038 11:32:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:39:41.038 * Looking for test storage... 
00:39:41.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:41.038 11:32:59 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:41.038 11:32:59 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:41.038 11:32:59 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:41.038 11:32:59 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:41.038 11:32:59 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:41.038 11:32:59 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:41.038 11:32:59 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- 
dd/sparse.sh@118 -- # prepare 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:39:41.039 1+0 records in 00:39:41.039 1+0 records out 00:39:41.039 4194304 bytes (4.2 MB) copied, 0.00528388 s, 794 MB/s 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:39:41.039 1+0 records in 00:39:41.039 1+0 records out 00:39:41.039 4194304 bytes (4.2 MB) copied, 0.00468132 s, 896 MB/s 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:39:41.039 1+0 records in 00:39:41.039 1+0 records out 00:39:41.039 4194304 bytes (4.2 MB) copied, 0.00521159 s, 805 MB/s 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:41.039 ************************************ 00:39:41.039 START TEST dd_sparse_file_to_file 00:39:41.039 ************************************ 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1121 -- # file_to_file 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:39:41.039 11:32:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:41.295 { 00:39:41.295 "subsystems": [ 00:39:41.295 { 00:39:41.295 "subsystem": "bdev", 00:39:41.295 "config": [ 00:39:41.295 { 00:39:41.295 "params": { 00:39:41.295 "block_size": 4096, 00:39:41.295 "name": "dd_aio", 00:39:41.295 "filename": "dd_sparse_aio_disk" 00:39:41.295 }, 00:39:41.295 "method": "bdev_aio_create" 00:39:41.295 }, 00:39:41.295 { 00:39:41.295 "params": { 00:39:41.295 "bdev_name": "dd_aio", 00:39:41.295 "lvs_name": "dd_lvstore" 00:39:41.295 }, 00:39:41.295 "method": "bdev_lvol_create_lvstore" 00:39:41.295 }, 00:39:41.295 { 00:39:41.295 "method": "bdev_wait_for_examine" 00:39:41.295 } 00:39:41.295 ] 
00:39:41.295 } 00:39:41.295 ] 00:39:41.295 } 00:39:41.295 [2024-05-15 11:32:59.812288] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:39:41.295 [2024-05-15 11:32:59.812481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80532 ] 00:39:41.554 [2024-05-15 11:32:59.962688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.554 [2024-05-15 11:33:00.178421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.520  Copying: 12/36 [MB] (average 1200 MBps) 00:39:43.520 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:39:43.520 00:39:43.520 real 0m2.387s 00:39:43.520 user 0m1.927s 00:39:43.520 sys 0m0.293s 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:43.520 ************************************ 00:39:43.520 END TEST dd_sparse_file_to_file 00:39:43.520 ************************************ 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:43.520 ************************************ 00:39:43.520 START TEST dd_sparse_file_to_bdev 00:39:43.520 ************************************ 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1121 -- # file_to_bdev 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size_in_mib"]=36 ["thin_provision"]=true) 00:39:43.520 
11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:43.520 11:33:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:43.779 { 00:39:43.779 "subsystems": [ 00:39:43.779 { 00:39:43.779 "subsystem": "bdev", 00:39:43.779 "config": [ 00:39:43.779 { 00:39:43.779 "params": { 00:39:43.779 "block_size": 4096, 00:39:43.779 "name": "dd_aio", 00:39:43.779 "filename": "dd_sparse_aio_disk" 00:39:43.779 }, 00:39:43.779 "method": "bdev_aio_create" 00:39:43.779 }, 00:39:43.779 { 00:39:43.779 "params": { 00:39:43.779 "size_in_mib": 36, 00:39:43.779 "thin_provision": true, 00:39:43.779 "lvol_name": "dd_lvol", 00:39:43.779 "lvs_name": "dd_lvstore" 00:39:43.779 }, 00:39:43.779 "method": "bdev_lvol_create" 00:39:43.779 }, 00:39:43.779 { 00:39:43.779 "method": "bdev_wait_for_examine" 00:39:43.779 } 00:39:43.779 ] 00:39:43.779 } 00:39:43.779 ] 00:39:43.779 } 00:39:43.779 [2024-05-15 11:33:02.228954] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:39:43.779 [2024-05-15 11:33:02.229173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80593 ] 00:39:43.779 [2024-05-15 11:33:02.382588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.039 [2024-05-15 11:33:02.601956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.982  Copying: 12/36 [MB] (average 125 MBps) 00:39:45.982 00:39:45.982 00:39:45.982 real 0m2.332s 00:39:45.982 user 0m1.934s 00:39:45.982 sys 0m0.255s 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:45.982 ************************************ 00:39:45.982 END TEST dd_sparse_file_to_bdev 00:39:45.982 ************************************ 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:45.982 ************************************ 00:39:45.982 START TEST dd_sparse_bdev_to_file 00:39:45.982 ************************************ 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1121 -- # bdev_to_file 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # 
method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:39:45.982 11:33:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:45.982 { 00:39:45.982 "subsystems": [ 00:39:45.982 { 00:39:45.982 "subsystem": "bdev", 00:39:45.982 "config": [ 00:39:45.982 { 00:39:45.982 "params": { 00:39:45.982 "block_size": 4096, 00:39:45.982 "name": "dd_aio", 00:39:45.982 "filename": "dd_sparse_aio_disk" 00:39:45.982 }, 00:39:45.982 "method": "bdev_aio_create" 00:39:45.982 }, 00:39:45.982 { 00:39:45.982 "method": "bdev_wait_for_examine" 00:39:45.982 } 00:39:45.982 ] 00:39:45.982 } 00:39:45.982 ] 00:39:45.982 } 00:39:45.982 [2024-05-15 11:33:04.614240] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:39:45.982 [2024-05-15 11:33:04.614422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80660 ] 00:39:46.241 [2024-05-15 11:33:04.761278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:46.500 [2024-05-15 11:33:04.974355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:48.136  Copying: 12/36 [MB] (average 1200 MBps) 00:39:48.136 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:39:48.136 ************************************ 00:39:48.136 END TEST dd_sparse_bdev_to_file 00:39:48.136 ************************************ 00:39:48.136 00:39:48.136 real 0m2.241s 00:39:48.136 user 0m1.824s 00:39:48.136 sys 0m0.274s 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 
00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:39:48.136 00:39:48.136 real 0m7.258s 00:39:48.136 user 0m5.781s 00:39:48.136 sys 0m1.000s 00:39:48.136 ************************************ 00:39:48.136 END TEST spdk_dd_sparse 00:39:48.136 ************************************ 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:48.136 11:33:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:48.394 11:33:06 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:39:48.395 11:33:06 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:48.395 11:33:06 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:48.395 11:33:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:48.395 ************************************ 00:39:48.395 START TEST spdk_dd_negative 00:39:48.395 ************************************ 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:39:48.395 * Looking for test storage... 00:39:48.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:48.395 ************************************ 00:39:48.395 START TEST dd_invalid_arguments 00:39:48.395 ************************************ 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1121 -- # invalid_arguments 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:48.395 11:33:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:48.654 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:39:48.654 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:39:48.654 00:39:48.654 CPU options: 00:39:48.654 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:39:48.654 (like [0,1,10]) 00:39:48.654 --lcores lcore to CPU mapping list. The list is in the format: 00:39:48.654 [<,lcores[@CPUs]>...] 00:39:48.654 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:39:48.654 Within the group, '-' is used for range separator, 00:39:48.654 ',' is used for single number separator. 00:39:48.654 '( )' can be omitted for single element group, 00:39:48.654 '@' can be omitted if cpus and lcores have the same value 00:39:48.654 --disable-cpumask-locks Disable CPU core lock files. 00:39:48.654 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:39:48.654 pollers in the app support interrupt mode) 00:39:48.654 -p, --main-core main (primary) core for DPDK 00:39:48.654 00:39:48.654 Configuration options: 00:39:48.654 -c, --config, --json JSON config file 00:39:48.654 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:39:48.654 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:39:48.654 --wait-for-rpc wait for RPCs to initialize subsystems 00:39:48.654 --rpcs-allowed comma-separated list of permitted RPCS 00:39:48.654 --json-ignore-init-errors don't exit on invalid config entry 00:39:48.654 00:39:48.654 Memory options: 00:39:48.654 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:39:48.654 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:39:48.654 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:39:48.654 -R, --huge-unlink unlink huge files after initialization 00:39:48.654 -n, --mem-channels number of memory channels used for DPDK 00:39:48.654 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:39:48.654 --msg-mempool-size global message memory pool size in count (default: 262143) 00:39:48.654 --no-huge run without using hugepages 00:39:48.654 -i, --shm-id shared memory ID (optional) 00:39:48.654 -g, --single-file-segments force creating just one hugetlbfs file 00:39:48.654 00:39:48.654 PCI options: 00:39:48.654 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:39:48.654 -B, --pci-blocked pci addr to block (can be used more than once) 00:39:48.654 -u, --no-pci disable PCI access 00:39:48.654 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:39:48.654 00:39:48.654 Log options: 00:39:48.654 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:39:48.654 app_config, app_rpc, bdev, bdev_concat, bdev_daos, bdev_ftl, 00:39:48.654 bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, 00:39:48.654 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:39:48.654 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:39:48.654 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:39:48.654 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:39:48.654 thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:39:48.654 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:39:48.654 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:39:48.654 virtio_vfio_user, vmd) 00:39:48.654 --silence-noticelog disable notice level logging to stderr 00:39:48.654 00:39:48.654 Trace options: 00:39:48.654 --num-trace-entries number of trace entries for each core, must be power of 2, 00:39:48.654 setting 0 to disable trace (default 32768) 00:39:48.654 Tracepoints vary in size and can use more than one trace entry. 00:39:48.654 -e, --tpoint-group [:] 00:39:48.654 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:39:48.654 blobf[2024-05-15 11:33:07.066224] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:39:48.654 s, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:39:48.654 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:39:48.654 a tracepoint group. First tpoint inside a group can be enabled by 00:39:48.654 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:39:48.654 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:39:48.654 in /include/spdk_internal/trace_defs.h 00:39:48.654 00:39:48.654 Other options: 00:39:48.654 -h, --help show this usage 00:39:48.654 -v, --version print SPDK version 00:39:48.654 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:39:48.654 --env-context Opaque context for use of the env implementation 00:39:48.654 00:39:48.654 Application specific: 00:39:48.654 [--------- DD Options ---------] 00:39:48.654 --if Input file. Must specify either --if or --ib. 00:39:48.654 --ib Input bdev. Must specifier either --if or --ib 00:39:48.654 --of Output file. Must specify either --of or --ob. 00:39:48.654 --ob Output bdev. Must specify either --of or --ob. 00:39:48.654 --iflag Input file flags. 00:39:48.654 --oflag Output file flags. 00:39:48.654 --bs I/O unit size (default: 4096) 00:39:48.654 --qd Queue depth (default: 2) 00:39:48.655 --count I/O unit count. The number of I/O units to copy. (default: all) 00:39:48.655 --skip Skip this many I/O units at start of input. (default: 0) 00:39:48.655 --seek Skip this many I/O units at start of output. (default: 0) 00:39:48.655 --aio Force usage of AIO. (by default io_uring is used if available) 00:39:48.655 --sparse Enable hole skipping in input target 00:39:48.655 Available iflag and oflag values: 00:39:48.655 append - append mode 00:39:48.655 direct - use direct I/O for data 00:39:48.655 directory - fail unless a directory 00:39:48.655 dsync - use synchronized I/O for data 00:39:48.655 noatime - do not update access time 00:39:48.655 noctty - do not assign controlling terminal from file 00:39:48.655 nofollow - do not follow symlinks 00:39:48.655 nonblock - use non-blocking I/O 00:39:48.655 sync - use synchronized I/O for data and metadata 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:48.655 00:39:48.655 real 0m0.167s 00:39:48.655 user 0m0.038s 00:39:48.655 sys 0m0.034s 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:48.655 ************************************ 00:39:48.655 END TEST dd_invalid_arguments 00:39:48.655 ************************************ 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:48.655 ************************************ 00:39:48.655 START TEST dd_double_input 00:39:48.655 ************************************ 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1121 -- # double_input 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:48.655 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:48.655 [2024-05-15 11:33:07.281277] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
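The negative tests in this block exercise spdk_dd's argument validation: exactly one input target (--if or --ib) and one output target (--of or --ob) may be given, and an invalid combination makes the binary exit with status 22, the es=22 captured in the traces. A minimal sketch of the rejected form from dd_double_input next to an accepted invocation, with illustrative paths:

# Rejected: an input file and an input bdev are both supplied, so spdk_dd prints
# "You may specify either --if or --ib, but not both" and exits non-zero (22 here).
./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob=
# Accepted: one input, one output, explicit I/O unit size (the default is 4096).
./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=4096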
00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:48.914 00:39:48.914 real 0m0.161s 00:39:48.914 user 0m0.035s 00:39:48.914 sys 0m0.031s 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:39:48.914 ************************************ 00:39:48.914 END TEST dd_double_input 00:39:48.914 ************************************ 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:48.914 ************************************ 00:39:48.914 START TEST dd_double_output 00:39:48.914 ************************************ 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1121 -- # double_output 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:48.914 [2024-05-15 11:33:07.492995] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:48.914 00:39:48.914 real 0m0.166s 00:39:48.914 user 0m0.036s 00:39:48.914 sys 0m0.035s 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:39:48.914 ************************************ 00:39:48.914 END TEST dd_double_output 00:39:48.914 ************************************ 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:48.914 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:49.173 ************************************ 00:39:49.173 START TEST dd_no_input 00:39:49.173 ************************************ 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1121 -- # no_input 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:49.173 [2024-05-15 11:33:07.699893] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:49.173 ************************************ 00:39:49.173 END TEST dd_no_input 00:39:49.173 ************************************ 00:39:49.173 00:39:49.173 real 0m0.163s 00:39:49.173 user 0m0.036s 00:39:49.173 sys 0m0.031s 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:49.173 ************************************ 00:39:49.173 START TEST dd_no_output 00:39:49.173 ************************************ 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1121 -- # no_output 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:49.173 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:49.431 [2024-05-15 11:33:07.924491] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:39:49.431 11:33:07 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:49.431 00:39:49.431 real 0m0.178s 00:39:49.431 user 0m0.039s 00:39:49.431 sys 0m0.045s 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:39:49.431 ************************************ 00:39:49.431 END TEST dd_no_output 00:39:49.431 ************************************ 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:49.431 ************************************ 00:39:49.431 START TEST dd_wrong_blocksize 00:39:49.431 ************************************ 00:39:49.431 11:33:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1121 -- # wrong_blocksize 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:49.432 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:49.690 [2024-05-15 11:33:08.147310] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:49.690 00:39:49.690 real 0m0.172s 00:39:49.690 user 0m0.040s 00:39:49.690 sys 0m0.037s 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:39:49.690 ************************************ 00:39:49.690 END TEST dd_wrong_blocksize 00:39:49.690 ************************************ 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:49.690 ************************************ 00:39:49.690 START TEST dd_smaller_blocksize 00:39:49.690 ************************************ 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1121 -- # smaller_blocksize 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:49.690 
11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:49.690 11:33:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:49.948 [2024-05-15 11:33:08.366360] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:39:49.948 [2024-05-15 11:33:08.366562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80943 ] 00:39:49.948 [2024-05-15 11:33:08.530270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:50.207 [2024-05-15 11:33:08.782630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:50.773 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:39:50.773 [2024-05-15 11:33:09.349698] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:39:50.773 [2024-05-15 11:33:09.349779] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:51.707 [2024-05-15 11:33:10.201223] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:51.966 ************************************ 00:39:51.966 END TEST dd_smaller_blocksize 00:39:51.966 ************************************ 00:39:51.966 11:33:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:39:51.966 11:33:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:51.966 11:33:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:39:51.966 11:33:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:39:51.966 11:33:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:39:51.966 11:33:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:51.966 00:39:51.966 real 0m2.367s 00:39:51.966 user 0m1.761s 00:39:51.966 sys 0m0.410s 00:39:51.966 11:33:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:51.966 11:33:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:52.225 ************************************ 00:39:52.225 START TEST dd_invalid_count 00:39:52.225 ************************************ 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1121 -- # invalid_count 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
--count=-9 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:52.225 [2024-05-15 11:33:10.783797] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:39:52.225 ************************************ 00:39:52.225 END TEST dd_invalid_count 00:39:52.225 ************************************ 00:39:52.225 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:39:52.226 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:52.226 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:52.226 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:52.226 00:39:52.226 real 0m0.183s 00:39:52.226 user 0m0.042s 00:39:52.226 sys 0m0.045s 00:39:52.226 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:52.226 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:39:52.226 11:33:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:39:52.226 11:33:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:52.226 11:33:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:52.226 11:33:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:52.484 ************************************ 00:39:52.484 START TEST dd_invalid_oflag 00:39:52.484 ************************************ 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1121 -- # invalid_oflag 00:39:52.484 11:33:10 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:52.484 11:33:10 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:52.484 [2024-05-15 11:33:11.012215] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:52.484 00:39:52.484 real 0m0.177s 00:39:52.484 user 0m0.047s 00:39:52.484 sys 0m0.035s 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:52.484 ************************************ 00:39:52.484 END TEST dd_invalid_oflag 00:39:52.484 ************************************ 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:52.484 ************************************ 00:39:52.484 START TEST dd_invalid_iflag 00:39:52.484 ************************************ 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1121 -- # invalid_iflag 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:52.484 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:52.744 [2024-05-15 11:33:11.232721] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:39:52.744 ************************************ 00:39:52.744 END TEST dd_invalid_iflag 00:39:52.744 ************************************ 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:52.744 00:39:52.744 real 0m0.175s 00:39:52.744 user 0m0.042s 00:39:52.744 sys 0m0.038s 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:52.744 ************************************ 00:39:52.744 START TEST dd_unknown_flag 00:39:52.744 ************************************ 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1121 -- # unknown_flag 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:52.744 11:33:11 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:53.003 [2024-05-15 11:33:11.462632] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 
00:39:53.003 [2024-05-15 11:33:11.463288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81071 ] 00:39:53.003 [2024-05-15 11:33:11.632270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.262 [2024-05-15 11:33:11.893289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.828  Copying: 0/0 [B] (average 0 Bps)[2024-05-15 11:33:12.261353] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:39:53.828 [2024-05-15 11:33:12.261457] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:53.828 [2024-05-15 11:33:12.261620] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:39:54.765 [2024-05-15 11:33:13.116586] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:55.066 00:39:55.066 00:39:55.066 ************************************ 00:39:55.066 END TEST dd_unknown_flag 00:39:55.066 ************************************ 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:55.066 00:39:55.066 real 0m2.205s 00:39:55.066 user 0m1.754s 00:39:55.066 sys 0m0.250s 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:55.066 ************************************ 00:39:55.066 START TEST dd_invalid_json 00:39:55.066 ************************************ 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1121 -- # invalid_json 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:55.066 11:33:13 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:55.066 11:33:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:55.325 [2024-05-15 11:33:13.710494] Starting SPDK v24.05-pre git sha1 b7a2519d9 / DPDK 23.11.0 initialization... 00:39:55.325 [2024-05-15 11:33:13.710664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81135 ] 00:39:55.325 [2024-05-15 11:33:13.871674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.584 [2024-05-15 11:33:14.099031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.584 [2024-05-15 11:33:14.099171] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:39:55.584 [2024-05-15 11:33:14.099236] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:55.584 [2024-05-15 11:33:14.099258] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:55.584 [2024-05-15 11:33:14.099335] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:55.856 ************************************ 00:39:55.856 END TEST dd_invalid_json 00:39:55.856 ************************************ 00:39:55.856 11:33:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:39:55.856 11:33:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:55.856 11:33:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:39:55.856 11:33:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:39:55.856 11:33:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:39:55.856 11:33:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:55.856 00:39:55.856 real 0m0.921s 00:39:55.856 user 0m0.597s 00:39:55.856 sys 0m0.129s 00:39:55.856 11:33:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:55.856 11:33:14 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:39:56.115 00:39:56.115 real 0m7.701s 00:39:56.115 user 0m4.679s 00:39:56.115 sys 0m1.533s 00:39:56.115 11:33:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:56.115 11:33:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:56.115 ************************************ 00:39:56.115 END TEST spdk_dd_negative 00:39:56.115 ************************************ 00:39:56.115 ************************************ 00:39:56.115 END TEST spdk_dd 00:39:56.115 ************************************ 00:39:56.115 00:39:56.115 real 2m57.116s 00:39:56.115 user 2m20.262s 00:39:56.115 sys 0m21.213s 00:39:56.115 11:33:14 spdk_dd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:56.115 11:33:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:56.115 11:33:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@256 -- # timing_exit lib 00:39:56.115 11:33:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:56.115 11:33:14 -- common/autotest_common.sh@10 -- # set +x 00:39:56.115 11:33:14 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:39:56.115 11:33:14 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:39:56.115 11:33:14 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:39:56.115 11:33:14 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:39:56.115 11:33:14 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:39:56.115 11:33:14 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:39:56.115 11:33:14 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:39:56.115 11:33:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:56.115 11:33:14 -- common/autotest_common.sh@10 -- # set +x 00:39:56.115 11:33:14 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:39:56.115 11:33:14 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:39:56.115 11:33:14 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:39:56.115 11:33:14 -- common/autotest_common.sh@10 -- # set +x 00:39:57.053 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:39:57.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:39:57.053 Waiting for block devices as requested 00:39:57.053 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:57.311 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:39:57.311 
/home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:39:57.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:39:57.589 Cleaning 00:39:57.589 Removing: /var/run/dpdk/spdk0/config 00:39:57.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:57.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:57.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:57.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:57.589 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:57.589 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:57.589 Removing: /dev/shm/spdk_tgt_trace.pid46154 00:39:57.589 Removing: /var/run/dpdk/spdk0 00:39:57.589 Removing: /var/run/dpdk/spdk_pid45898 00:39:57.589 Removing: /var/run/dpdk/spdk_pid46154 00:39:57.589 Removing: /var/run/dpdk/spdk_pid46416 00:39:57.589 Removing: /var/run/dpdk/spdk_pid46538 00:39:57.589 Removing: /var/run/dpdk/spdk_pid46601 00:39:57.589 Removing: /var/run/dpdk/spdk_pid46748 00:39:57.589 Removing: /var/run/dpdk/spdk_pid46777 00:39:57.589 Removing: /var/run/dpdk/spdk_pid46955 00:39:57.589 Removing: /var/run/dpdk/spdk_pid47216 00:39:57.589 Removing: /var/run/dpdk/spdk_pid47417 00:39:57.589 Removing: /var/run/dpdk/spdk_pid47529 00:39:57.589 Removing: /var/run/dpdk/spdk_pid47652 00:39:57.589 Removing: /var/run/dpdk/spdk_pid47788 00:39:57.589 Removing: /var/run/dpdk/spdk_pid47911 00:39:57.589 Removing: /var/run/dpdk/spdk_pid47964 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48007 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48096 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48240 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48327 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48418 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48443 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48619 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48640 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48824 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48845 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48926 00:39:57.589 Removing: /var/run/dpdk/spdk_pid48956 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49028 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49051 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49267 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49312 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49360 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49450 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49548 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49600 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49691 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49747 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49808 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49866 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49924 00:39:57.589 Removing: /var/run/dpdk/spdk_pid49987 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50040 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50103 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50161 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50212 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50277 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50335 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50391 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50449 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50556 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50700 00:39:57.589 Removing: /var/run/dpdk/spdk_pid50911 00:39:57.589 Removing: /var/run/dpdk/spdk_pid51002 00:39:57.589 Removing: /var/run/dpdk/spdk_pid51077 00:39:57.589 
Removing: /var/run/dpdk/spdk_pid51220 00:39:57.589 Removing: /var/run/dpdk/spdk_pid51450 00:39:57.589 Removing: /var/run/dpdk/spdk_pid51667 00:39:57.589 Removing: /var/run/dpdk/spdk_pid51800 00:39:57.589 Removing: /var/run/dpdk/spdk_pid51946 00:39:57.589 Removing: /var/run/dpdk/spdk_pid52020 00:39:57.589 Removing: /var/run/dpdk/spdk_pid52058 00:39:57.589 Removing: /var/run/dpdk/spdk_pid52096 00:39:57.589 Removing: /var/run/dpdk/spdk_pid52577 00:39:57.589 Removing: /var/run/dpdk/spdk_pid52675 00:39:57.589 Removing: /var/run/dpdk/spdk_pid52799 00:39:57.589 Removing: /var/run/dpdk/spdk_pid52862 00:39:57.589 Removing: /var/run/dpdk/spdk_pid53899 00:39:57.589 Removing: /var/run/dpdk/spdk_pid55048 00:39:57.589 Removing: /var/run/dpdk/spdk_pid56190 00:39:57.589 Removing: /var/run/dpdk/spdk_pid58687 00:39:57.589 Removing: /var/run/dpdk/spdk_pid61173 00:39:57.589 Removing: /var/run/dpdk/spdk_pid63681 00:39:57.589 Removing: /var/run/dpdk/spdk_pid66715 00:39:57.589 Removing: /var/run/dpdk/spdk_pid69496 00:39:57.589 Removing: /var/run/dpdk/spdk_pid72297 00:39:57.589 Removing: /var/run/dpdk/spdk_pid73591 00:39:57.589 Removing: /var/run/dpdk/spdk_pid74449 00:39:57.589 Removing: /var/run/dpdk/spdk_pid75311 00:39:57.589 Removing: /var/run/dpdk/spdk_pid75788 00:39:57.589 Removing: /var/run/dpdk/spdk_pid76665 00:39:57.589 Removing: /var/run/dpdk/spdk_pid76729 00:39:57.589 Removing: /var/run/dpdk/spdk_pid76787 00:39:57.589 Removing: /var/run/dpdk/spdk_pid76850 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77006 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77156 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77394 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77642 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77676 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77734 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77770 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77799 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77848 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77875 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77908 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77947 00:39:57.589 Removing: /var/run/dpdk/spdk_pid77979 00:39:57.589 Removing: /var/run/dpdk/spdk_pid78015 00:39:57.589 Removing: /var/run/dpdk/spdk_pid78055 00:39:57.589 Removing: /var/run/dpdk/spdk_pid78093 00:39:57.589 Removing: /var/run/dpdk/spdk_pid78122 00:39:57.589 Removing: /var/run/dpdk/spdk_pid78165 00:39:57.589 Removing: /var/run/dpdk/spdk_pid78197 00:39:57.589 Removing: /var/run/dpdk/spdk_pid78230 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78269 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78304 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78340 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78392 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78428 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78477 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78573 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78626 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78658 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78704 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78741 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78767 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78832 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78862 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78916 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78949 00:39:57.847 Removing: /var/run/dpdk/spdk_pid78977 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79002 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79031 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79070 00:39:57.847 Removing: 
/var/run/dpdk/spdk_pid79095 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79124 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79175 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79226 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79258 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79304 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79337 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79366 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79439 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79465 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79513 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79545 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79570 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79602 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79627 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79656 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79680 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79711 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79828 00:39:57.847 Removing: /var/run/dpdk/spdk_pid79910 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80066 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80101 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80172 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80229 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80267 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80296 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80341 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80390 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80431 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80532 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80593 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80660 00:39:57.847 Removing: /var/run/dpdk/spdk_pid80943 00:39:57.847 Removing: /var/run/dpdk/spdk_pid81071 00:39:57.847 Removing: /var/run/dpdk/spdk_pid81135 00:39:57.847 Clean 00:39:57.847 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:39:57.847 11:33:16 -- common/autotest_common.sh@1447 -- # return 0 00:39:57.847 11:33:16 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:39:57.847 11:33:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:57.847 11:33:16 -- common/autotest_common.sh@10 -- # set +x 00:39:57.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 678: 37065 Terminated ${SUDO[MONITOR_RESOURCES_SUDO["$monitor"]]} "$_pmdir/$monitor" -d "$PM_OUTPUTDIR" -l -p "monitor.${0##*/}.$(date +%s)" (wd: /home/vagrant/spdk_repo) 00:39:57.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 678: 37067 Terminated ${SUDO[MONITOR_RESOURCES_SUDO["$monitor"]]} "$_pmdir/$monitor" -d "$PM_OUTPUTDIR" -l -p "monitor.${0##*/}.$(date +%s)" (wd: /home/vagrant/spdk_repo) 00:39:57.847 11:33:16 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:39:57.847 11:33:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:57.847 11:33:16 -- common/autotest_common.sh@10 -- # set +x 00:39:57.847 11:33:16 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:39:57.847 11:33:16 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:39:57.847 11:33:16 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:39:57.847 11:33:16 -- spdk/autotest.sh@387 -- # hash lcov 00:39:57.847 11:33:16 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:39:57.847 11:33:16 -- spdk/autotest.sh@389 -- # hostname 00:39:57.847 11:33:16 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:39:58.104 geninfo: WARNING: invalid characters removed from testname! 00:41:05.781 11:34:13 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:41:05.781 11:34:19 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:41:05.781 11:34:23 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:41:08.315 11:34:26 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:41:11.596 11:34:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:41:15.784 11:34:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:41:19.065 11:34:37 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:19.065 11:34:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:19.065 11:34:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:41:19.065 11:34:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:19.065 11:34:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:19.065 11:34:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:41:19.065 11:34:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:41:19.065 11:34:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:41:19.065 11:34:37 -- paths/export.sh@5 -- $ export PATH 00:41:19.065 11:34:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:41:19.065 11:34:37 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:41:19.065 11:34:37 -- common/autobuild_common.sh@437 -- $ date +%s 00:41:19.065 11:34:37 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715772877.XXXXXX 00:41:19.065 11:34:37 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715772877.DhAvlN 00:41:19.065 11:34:37 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:41:19.065 11:34:37 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:41:19.065 11:34:37 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:41:19.065 11:34:37 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:41:19.065 11:34:37 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:41:19.065 11:34:37 -- common/autobuild_common.sh@453 -- $ get_config_params 00:41:19.065 11:34:37 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:41:19.065 11:34:37 -- common/autotest_common.sh@10 -- $ set +x 00:41:19.065 11:34:37 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos' 00:41:19.065 11:34:37 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:41:19.065 11:34:37 -- pm/common@17 -- $ local monitor 00:41:19.065 11:34:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:19.065 11:34:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:19.065 11:34:37 -- pm/common@25 -- $ sleep 1 00:41:19.065 11:34:37 -- pm/common@21 -- $ date +%s 00:41:19.065 11:34:37 -- pm/common@21 -- $ date +%s 00:41:19.065 11:34:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715772877 00:41:19.065 11:34:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715772877 00:41:19.065 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715772877_collect-vmstat.pm.log 00:41:19.065 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715772877_collect-cpu-load.pm.log 00:41:19.632 11:34:38 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:41:19.632 11:34:38 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:41:19.632 11:34:38 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:41:19.632 11:34:38 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:41:19.632 11:34:38 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:41:19.632 11:34:38 -- spdk/autopackage.sh@19 -- $ timing_finish 00:41:19.632 11:34:38 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:41:19.632 11:34:38 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:41:19.632 11:34:38 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:41:19.890 11:34:38 -- spdk/autopackage.sh@20 -- $ exit 0 00:41:19.890 11:34:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:41:19.890 11:34:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:41:19.890 11:34:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:41:19.890 11:34:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:19.890 11:34:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:41:19.890 11:34:38 -- pm/common@44 -- $ pid=82485 00:41:19.890 11:34:38 -- pm/common@50 -- $ kill -TERM 82485 00:41:19.890 11:34:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:19.890 11:34:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:41:19.890 11:34:38 -- pm/common@44 -- $ pid=82486 00:41:19.890 11:34:38 -- pm/common@50 -- $ kill -TERM 82486 00:41:19.890 + [[ -n 2827 ]] 00:41:19.890 + sudo kill 2827 00:41:19.890 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:41:19.898 [Pipeline] } 00:41:19.915 [Pipeline] // timeout 00:41:19.922 [Pipeline] } 00:41:19.940 [Pipeline] // stage 00:41:19.945 [Pipeline] } 00:41:19.964 [Pipeline] // catchError 00:41:19.973 [Pipeline] stage 00:41:19.975 [Pipeline] { (Stop VM) 00:41:19.989 [Pipeline] sh 00:41:20.261 + vagrant halt 00:41:24.443 ==> default: Halting domain... 00:41:31.012 [Pipeline] sh 00:41:31.289 + vagrant destroy -f 00:41:35.477 ==> default: Removing domain... 00:41:35.488 [Pipeline] sh 00:41:35.765 + mv output /var/jenkins/workspace/centos7-vg-autotest/output 00:41:35.775 [Pipeline] } 00:41:35.797 [Pipeline] // stage 00:41:35.803 [Pipeline] } 00:41:35.820 [Pipeline] // dir 00:41:35.826 [Pipeline] } 00:41:35.844 [Pipeline] // wrap 00:41:35.850 [Pipeline] } 00:41:35.866 [Pipeline] // catchError 00:41:35.875 [Pipeline] stage 00:41:35.877 [Pipeline] { (Epilogue) 00:41:35.891 [Pipeline] sh 00:41:36.168 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:58.107 [Pipeline] catchError 00:41:58.109 [Pipeline] { 00:41:58.125 [Pipeline] sh 00:41:58.403 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:58.403 Artifacts sizes are good 00:41:58.413 [Pipeline] } 00:41:58.432 [Pipeline] // catchError 00:41:58.444 [Pipeline] archiveArtifacts 00:41:58.451 Archiving artifacts 00:41:58.817 [Pipeline] cleanWs 00:41:58.828 [WS-CLEANUP] Deleting project workspace... 00:41:58.828 [WS-CLEANUP] Deferred wipeout is used... 
00:41:58.833 [WS-CLEANUP] done 00:41:58.836 [Pipeline] } 00:41:58.856 [Pipeline] // stage 00:41:58.864 [Pipeline] } 00:41:58.883 [Pipeline] // node 00:41:58.889 [Pipeline] End of Pipeline 00:41:58.929 Finished: SUCCESS